*Matt Strassler [April 27, 2012]*

In my article on energy and mass and related issues, I focused attention on particles — which are ripples in fields — and the equation that Einstein used to relate their energy, momentum and mass. But energy arises in other places, not just through particles. To really understand the universe and how it works, you have to understand that energy can arise **in the interaction among different fields**, or even in the interaction of a field with itself. All the structure of our world — protons, atoms, molecules, bodies, mountains, planets, stars, galaxies — arises out of this type of energy. In fact, many types of energy that we talk about as though they are really different — chemical energy, nuclear energy, electromagnetic energy — either are a form of or involve in some way this more general concept of **interaction energy**.

*In beginning physics classes this type of energy includes what is called “potential energy”. But both because “potential” has a different meaning in English than it does in physics, and because the way the concept is explained in freshman physics classes is so different from the modern viewpoint, I prefer to use a different name here, to pull the notion away from any pre-conceptions or mis-conceptions that you might already hold.*

*Also, in a previous version of my mass and energy article I called “interaction energy” by a different name, “relationship energy”. You’ll see why below; but I’ve decided this is a bad idea and have switched over to the new name.*

**Preamble: Review of Concepts**

In the current viewpoint favored by physicists and validated *(i.e. shown to be not false, but not necessarily unique)* in many experiments, the world is made from fields.

The most intuitive example of a field is the wind:

- you can measure it everywhere,
- it can be zero or non-zero, and
- it can have waves (which we call sound.)

Most fields can have waves in them, and those waves have the property, because of quantum mechanics, that they cannot be of arbitrarily small height.

- The wave of smallest possible height — of smallest amplitude, and of smallest intensity — is what we call a “quantum”, or more commonly, but in a way that invites confusion, a “particle.”

A photon is a quantum, or particle, of light (and here the term “light” includes both visible light and other forms); it is the dimmest flash of light, the least intense wave in the electric and magnetic fields that you can create without having no flash at all. You can have two photons, or three, or sixty-two; you cannot have a third of a photon, or two and a half. Your eye is structured to account for this; it absorbs light one photon at a time.
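As a rough illustration of how small a single quantum of light is, here is a back-of-envelope sketch (not from the article; the 1 milliwatt laser and 532 nm wavelength are assumed example values) of how many photons per second an ordinary laser pointer emits:

```python
# Estimate photons per second from a small 1 mW green laser pointer,
# to show why individual quanta go unnoticed at everyday intensities.
h = 6.626e-34        # Planck's constant, J*s
c = 3.0e8            # speed of light, m/s
wavelength = 532e-9  # green light, m (assumed example value)

energy_per_photon = h * c / wavelength      # E = h*f = h*c/lambda
photons_per_second = 1e-3 / energy_per_photon  # 1 mW of power

print(energy_per_photon)   # ~3.7e-19 joules per photon
print(photons_per_second)  # ~2.7e15 photons per second
```

With quadrillions of photons arriving per second, the graininess of light is invisible; only at very low intensities does the one-photon-at-a-time structure matter.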

The same is true of electrons, muons, quarks, W particles, the Higgs particle and all the others. They are all quanta of their respective fields.

A quantum, though a ripple in a field, is like a particle in that

- it retains its integrity as it moves through empty space
- it has a definite (though observer-dependent) energy and momentum
- it has a definite (and observer-independent) mass
- it can only be emitted or absorbed as a unit.

*[Recall how I define mass according to the convention used by particle physicists; E = mc^{2} only for a particle at rest, while a particle that is moving has E > mc^{2}, with mass-energy mc^{2} and motion-energy which is always positive. My particle physics colleagues and I do not subscribe to the point of view that it is useful to view mass as increasing with velocity; we view this definition of mass as archaic. We define mass as velocity-independent — what people used to call “rest mass”, we just call “mass”. I’ll explain why elsewhere, but it is very important to keep this convention in mind while reading the present article.]*
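A small numerical sketch of this convention (my own illustration, with an assumed momentum value): the mass is fixed, and a moving particle's total energy follows from E² = (pc)² + (mc²)², so motion-energy is whatever exceeds the mass-energy.

```python
# Particle physicists' convention: mass is velocity-independent;
# a moving particle satisfies E^2 = (p c)^2 + (m c^2)^2.
# Units: electron-volts, with c set to 1 for brevity.
import math

m = 0.511e6          # electron mass-energy (m c^2), eV
p = 1.0e6            # an assumed momentum, eV/c

E = math.sqrt(p**2 + m**2)   # total energy of the moving electron
motion_energy = E - m        # positive whenever p != 0

print(E)              # ~1.123e6 eV
print(motion_energy)  # ~0.612e6 eV of motion-energy
```

The mass in this bookkeeping never changes; only the motion-energy grows with momentum.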

**The Energy of Interacting Fields**

Now, with that preamble, I want to turn to the most subtle form of energy. A particle has energy through its mass and through its motion. And remember that a particle is a ripple in a field — a well-defined wave.

Fields can do many other things, not just ripple. For example, a ripple in one field can cause a non-ripple disturbance in another field with which it interacts. I have sketched this in Figure 1, where in blue you see a particle (i.e. a quantum) in *one* field, and in green you see the response of a *second* field.

Suppose now there are two particles — for clarity only, let’s make them ripples in two different fields, so I’ve shown one in blue and one in orange in Figure 2 — and both of those fields interact with the field shown in green. Then the disturbance in the green field can be somewhat more complicated. Again, this is a sketch, not a precise rendition of what is a bit too complicated to show clearly in a picture, but it gives the right idea.

Ok, so what is the energy of this system of two particles — two ripples, one in each of two different fields — and a third field with which both interact?

The ripples are quanta, or particles; they each have mass and motion energy, both of which are **positive**.

The green field’s disturbance has some energy too; it’s also **positive**, though often quite small compared to the energy of the particles in a case like this. That’s often called *field energy*.

But there is additional energy in the relationship between the various fields; where the blue and green fields are both large, there is energy, and where the green and orange fields are both large, there is also energy. And here’s the strange part. If you compare Figures 1 and 2, both of them have energy in the region where the blue and green fields are large. But the presence of the ripple in the orange field in the vicinity alters the green field, and therefore *changes the energy in the region where the blue field’s ripple is sitting,* as indicated in Figure 3.

Depending upon the details of how the orange and green fields interact with each other, and how the blue and green fields interact with each other, the change in the energy may be either **positive or negative**. This change is what I’m going to call *interaction energy*.

The possibility of negative shifts in the energy of the blue and green field’s interaction, due to the presence of the orange ripple (and vice versa) — **the possibility that interaction energy can be negative** — is the single most important fact that allows for all of the structure in the universe, from atomic nuclei to human bodies to galaxies. And that’s what comes next in this story.

**The Earth and the Moon**

The Earth is obviously not a particle; it is a huge collection of particles, ripples in many different fields. But what I’ve just said applies to multiple ripples, not just one, and they all interact with gravitational fields. So the argument, in the end, is identical.

Imagine the Earth on its own. Its presence creates a disturbance in the gravitational field *(which in Einstein’s viewpoint is a distortion of the local space and time, but that detail isn’t crucial to what I’m telling you here.)* Now put the Moon nearby. The gravitational field is also disturbed by the Moon. And the gravitational field near the Earth changes as a result of the presence of the Moon. The detailed way that gravity interacts with the particles and fields that make up the Earth assures that *the effect of the Moon is to produce a negative interaction energy between the gravitational field and the Earth.* The reverse is also true.

And this is why the Moon and Earth cannot fly apart, and instead remain trapped, bound together as surely as if they were attached with a giant cord. Because **if the Moon were very, very far from the Earth**, the interaction energy of the system — of the Earth, the Moon, and the gravitational field — would be **zero**, instead of **negative**. But energy is conserved. So to move the Moon far from the Earth compared to where it is right now, **positive** energy — a whole lot of it — would have to come from somewhere, to allow for the negative interaction energy to become zero. The Moon and Earth have positive motion-energy as they orbit each other, but not enough for them to escape each other.

Short of flinging another planet into the moon, there’s no viable way to get that kind of energy, accidentally or on purpose, from anywhere in the vicinity; all of humanity’s weapons together wouldn’t even come remotely close. So the Moon cannot spontaneously move away from the Earth; it is stuck here, in orbit, unless and until some spectacular calamity jars it loose.
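A back-of-envelope check of just how much positive energy would be needed (my own numerical sketch, using standard textbook values and the Newtonian approximation):

```python
# Gravitational interaction (potential) energy of the Earth-Moon system,
# in the Newtonian approximation U = -G M m / r.  It is negative, and
# enormous compared to anything humanity could supply.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # kg
M_moon  = 7.342e22  # kg
r = 3.844e8         # mean Earth-Moon distance, m

U = -G * M_earth * M_moon / r
print(U)   # ~ -7.6e28 joules: the negative interaction energy
```

Roughly minus 10²⁹ joules; to pry the Moon loose, that much positive energy would have to be supplied from somewhere.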

*You may know that the most popular theory of how the Earth and Moon formed is through the collision of two planet-sized objects, a larger pre-Earth and a Mars-sized object; this theory explains a lot of otherwise confusing puzzles about the Moon. Certainly there were very high-energy planet-scale collisions in the early solar system as the sun and planets formed over four billion years ago! But such collisions haven’t happened for a long, long, long time.*

The same logic explains why artificial satellites remain bound to the Earth, why the Earth remains bound to the Sun, and why the Sun remains bound to the Milky Way Galaxy, the city of a trillion stars which we inhabit.

**The Hydrogen Atom**

And on a much smaller scale, and with more subtle consequences, the electron and proton that make up a hydrogen atom remain bound to each other, unless energy is put in from outside to change it. This time the field that does the main part of the job is the **electric** field. In the presence of the electron, the interaction energy between the electric field and the proton (and vice versa) is **negative**. The result is that once you form a hydrogen atom from an electron and a proton *(and you wait for a tiny fraction of a second until they settle down to their preferred configuration, known as the “ground state”)*, the amount of energy that you would need to put in to separate them is about 14 electron-volts.

*(What’s an electron-volt? It’s a quantity of energy, very very small by human standards, but useful in atomic physics.)* We call this the “binding energy” of hydrogen.

We can measure that the binding energy is -14 electron-volts by shining ultraviolet light *(photons with energy a bit too large to be detected by your eyes)* onto hydrogen atoms, and seeing how energetic the photons need to be in order to break hydrogen apart. We can also calculate it using the equations of quantum mechanics — and the success of this prediction is one of the easiest tests of the modern theory of quantum physics.
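To get a feel for that measurement, here is a quick sketch (my own, not from the article) of how energetic those ultraviolet photons must be. I use the standard 13.6 eV ionization energy, which the article rounds to "about 14 electron-volts":

```python
# How short a wavelength must light have to break hydrogen apart?
# Longest ionizing wavelength: lambda = h*c / E_binding.
h = 6.626e-34    # Planck's constant, J*s
c = 3.0e8        # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

E_binding = 13.6 * eV
wavelength_max = h * c / E_binding

print(wavelength_max)  # ~9.1e-8 m, i.e. ~91 nm: deep ultraviolet
```

Visible light (400–700 nm) falls far short, which is why hydrogen atoms around us do not ionize spontaneously.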

But now I want to bring you back to something I said in my mass and energy article, one of Einstein’s key insights that he obtained from working out the consequences of his equations. If you have a system of objects, the mass of the system is not the sum of the masses of the objects that it contains. It is not even proportional to the sum of the energies of the particles that it contains. It is **the total energy of the system divided by c^{2}**, *as viewed by an observer who is stationary relative to the system*. (For an observer relative to whom the system is moving, the system will have additional motion-energy, which does not contribute to the system’s mass.) And that total energy involves

- the mass energies of the particles (ripples in the fields), plus
- the motion-energies of the particles, plus
- other sources of field-energy from non-ripple disturbances, plus
- the interaction energies among the fields.

What do we learn from the fact that the energy required to break apart hydrogen is 14 electron volts? Well, once you’ve broken the hydrogen atom apart you’re basically left with **a proton and an electron that are far apart and not moving much**. At that point, the energy of the system is

- the mass energies of the particles = electron mass-energy + proton mass-energy = 510,999 electron-volts + 938,272,013 electron-volts
- the motion-energies of the particles = 0
- other sources of field-energy from non-ripple disturbances = 0
- the interaction energies among the fields = 0

Meanwhile, we know that before we broke it up, the system of a hydrogen atom had energy that was 14 electron volts less than this.

Now the mass-energy of an electron is **always** 510,999 electron-volts and the mass-energy of a proton is **always** 938,272,013 electron-volts, no matter what they are doing, so *the mass-energy contribution to the total energy is the same for hydrogen as it is for a widely separated electron and proton*. What must be the case is that

- the motion-energies of the particles inside hydrogen
- PLUS other sources of field-energy from non-ripple disturbances (really really small here)
- PLUS the interaction energies among the fields
**MUST EQUAL the binding energy of -14 electron volts.**

In fact, if you do the calculation, the way the numbers work out is (approximately)

- the motion-energies of the particles = +14 electron volts
- other sources of field-energy from non-ripple disturbances = really really small
- the interaction energies among the fields = -28 electron volts.

and the sum of these things is -14 electron volts.

*It’s not an accident that the interaction energy is -2 times the motion energy; roughly, that comes from having a 1/r^{2} force law for electrical forces. Experts: it follows from the virial theorem.*
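The virial bookkeeping above can be sketched in a few lines (my own illustration; for a 1/r potential the virial theorem gives kinetic energy = -E_total and potential energy = 2 E_total, and I use the textbook -13.6 eV where the article rounds to -14):

```python
# Virial theorem for a 1/r potential (hydrogen's ground state):
#   <kinetic> = -E_total,  <potential> = 2 * E_total,
# so the interaction energy is -2 times the motion energy.
E_total = -13.6            # ground-state energy of hydrogen, eV

kinetic = -E_total         # motion-energy of the particles: +13.6 eV
potential = 2 * E_total    # interaction energy: -27.2 eV

print(kinetic, potential)
print(kinetic + potential)  # recovers E_total: the binding energy
```

The positive motion-energy and the twice-as-large negative interaction energy always recombine into the total, just as in the list above.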

*What is the mass of a hydrogen atom, then?* It is

- the electron mass + the proton mass + (binding energy/c^{2})

and since the binding energy is negative, thanks to the big negative interaction energy,

- m_{hydrogen} < m_{proton} + m_{electron}

This is among the most important facts in the universe!
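Putting the article's own numbers together (a sketch, using the electron-volt values quoted above):

```python
# Mass-energy bookkeeping for hydrogen, in electron-volts:
# the atom's mass-energy is the constituents' mass-energies
# plus the (negative) binding energy.
m_electron = 510_999       # eV
m_proton = 938_272_013     # eV
binding = -14              # eV (net of motion + interaction energy)

m_hydrogen = m_electron + m_proton + binding
deficit_fraction = -binding / m_hydrogen

print(m_hydrogen)        # less than m_proton + m_electron
print(deficit_fraction)  # ~1.5e-8: a tiny but crucial mass deficit
```

The deficit is only about 15 parts per billion, yet that minus sign is what makes the atom stable.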

**Why the hydrogen atom does not decay**

I’m now going to say these same words back to you in a slightly different language, the language of a particle physicist.

**Hydrogen is a stable composite object made from a proton and an electron, bound together by interacting with the electric field.**

Why is it *stable?*

Any object that is not stable will decay; and *a decay is only possible if the sum of the masses of the particles to which the initial object decays is less than the mass of the original object.* This follows from the conservation of energy and momentum; for an explanation, click here.

The minimal things to which a hydrogen atom could decay are a proton and an electron. But the mass of the hydrogen atom is smaller (because of that minus 14 electron volts of binding energy) than the mass of the electron plus the mass of the proton. Let me restate that really important equation:

- m_{hydrogen} < m_{proton} + m_{electron}

There is nothing else in particle physics to which hydrogen can decay, so we’re done: **hydrogen cannot decay at all**.

*[This is true until and unless the proton itself decays, which may in fact be possible but spectacularly rare — so rare that we’ve never actually detected it happening. We already know it is so rare that not a single proton in your body will decay during your lifetime. So let’s set that possibility aside as irrelevant for the moment.]*

The same argument applies to all atoms. Atoms are stable because the interaction energy between electrons and an atomic nucleus is negative; the mass of the atom is consequently less than the sum of the masses of its constituents, so the atom cannot simply fall apart into the electrons and nucleus that make it up.

The one caveat: the atom can fall apart in another way if its nucleus can itself decay. And while a proton cannot decay *(or perhaps does so, but extraordinarily rarely)*, this is not true of most nuclei.

And this brings us to the **big questions**.

- Why is the neutron, which is unstable when on its own, stable inside some atomic nuclei?
- Why are some atomic nuclei stable while others are not?
- Why is the proton stable when it is heavier than the quarks that it contains?

To be continued…

With you so far and pleased to see the wind-field analogy here. I was the giant black rabbit in the front row during your talk in Terra Nova on Saturday but I had to give up at the point you started talking about wind as a field because the sound feed kept breaking up. I am now on the edge of my seat to discover why the neutron is not stable on its own…

will post the on-line link to the lecture shortly.

… or rather why it IS stable in the nucleus (your linked article on conservation of energy and momentum explains the on-its-own neutron instability)….

This is a great article from great Matt. But please do not forget to tell us how non-material, abstract statistics control / direct / confine the exact half-life of all decaying nuclei, so that the lump never deviates from that strict value. How does the nucleus know that it cannot decay, or must decay, or else the half-life would change? This dilemma was mentioned in Arthur Koestler’s book (The Roots of Coincidence) but no answer was given.

GOD bless you matt.

As the minus sign must be related to a fixed, agreed-upon datum, what is the datum with which we compare the binding energy to find that it is less / opposite / the other way around / ……etc. from that datum?

Dear Professor,

I’m visualizing these things pretty clearly, thank you. I’m ready for the next lesson!

Just how heretical is the notion of omnipresent fields? Is space time usefully thought about as a field? If so, it is the only field with broken Lorentz symmetry?

Not heretical in the slightest. It’s standard fare; every university in the country with a particle physics program has a Quantum Field Theory course.

Space-time isn’t quite the field itself — this is a tricky point. That’s why I glossed over it. The fields in gravity are a bit more complicated than that. But I don’t think I want to try to answer this clearly now.

In flat and unchanging space, gravity doesn’t break Lorentz symmetry at all.

Of course, the universe is expanding, and that defines a preferred sense of time (in the part of the universe that we can see, at least). And that does mean that the gravitational fields in our universe do, on the largest distance scales, break Lorentz symmetry, yes. And no other fields break Lorentz symmetries on the large scale, no.

But the answer to your last question depends upon exactly what you meant. Globally across the universe, nothing but gravity breaks Lorentz symmetry (at least no one has ever detected any other source of Lorentz breaking.) In small regions there’s all sorts of breaking by all sorts of things. For instance, there are stray magnetic fields in all sorts of places around the universe, and so locally those break Lorentz symmetry. And hey, even the earth breaks Lorentz symmetry (that’s why up is different from down, for instance, when you’re near the earth.)

Is attraction by definition a negative energy? What is its physical meaning? For example, in hydraulics we speak of negative potential energy of dammed water if we choose the datum line above the water surface, so the water head is BELOW IT.

attraction results from the fact that the negative interaction energy becomes more and more negative as you bring the earth and moon closer and closer together.

Q: Why is the moon moving away from the earth when it should be getting closer and closer to it if gravity holds the two bodies? I know it’s said that in early Earth’s history, the moon was much closer to Earth. Shouldn’t the larger body win the tug of war? Sorry for being slightly off the topic but I’m still sorting out the data presented.

“The detailed way that gravity interacts with the particles and fields that make up the Earth assures that the effect of the Moon is to produce a negative interaction energy between the gravitational field and the Earth.”

Huh ?

I just thought this results from looking at Einstein’s classical field equations without taking the detailed interactions between quantum fields into account … ?

didn’t you ever wonder where Einstein’s classical field equations come from, given that we live in a quantum world and the earth and moon are made from things that are described by quantum mechanics?

But in any case, that’s not very relevant — because everything I said is also true of the classical field equations; I just used a quantum language to describe it, but the math is essentially identical.

Thank you so much for finally getting around to this topic

I keep my promises; it just can take a while as I figure out how to do it.

“interaction among different fields, or even in the interaction of a field with itself”

I wrote a paper to show our errors in guessing the interaction energy (interaction Lagrangian), see here https://docs.google.com/open?id=0B4Db4rFq72mLcTRlakNVM1pSdm8

Glad to see the virial theorem at work on smaller scales than clusters. For us non-experts, the 1/r potential case is done in Wikipedia (for gravity).

“In the current viewpoint favored by physicists and validated (i.e. shown to be not false, but not necessarily unique) in many experiments,”

The meaning of validation is seldom stated clearly, this was a refreshing brief!

“Your eye is structured to account for this; it absorbs light one photon at a time.”

Allow me a more detailed model – when I was doing my PhD work we were preparing a book on sensors (which was never to be) and I made the biological photon sensor chapter. Mind that this is several decades old biology and from the top of my head:

– It is only at low light intensities (dark adapted) that the eye is counting photons. Not with much quantum efficiency mind, but that is because our eye evolved to be lousy. (For example, cats double dark-adaptation efficiency by having layered index tissue mirrors beneath the neural layers – cat’s eyes.)

– At moderate intensities rods and cones absorb several photons faster than they can transmit nerve impulses.

Our eyes have evolved to regulate response by many mechanisms: from non-linear response in the photochemical receptors, instantaneously and over time (bleaching, part of light vs dark adaptation), through regulation of the cellular cascades that produce the neural signal and of the pigment-refresh response to bleaching (both part of light vs dark adaptation), to neural tissue feedbacks and feed-forwards (probably also part of light vs dark adaptation).

The idea is correct, for some reason or other pigments absorbing one photon at a time have been utilized in biology many times over. But it is not always (in fact, seldom) the function of the eye.

Forgive me for being so dumb, but am I right in saying that the minus sign is ONLY in equations, not reality, and what you mean is that part of the motion energy is transformed to binding energy, so that part IS TAKEN from the motion energy? So here we see the (-) sign OF A PART TAKEN FROM THE WHOLE, with no negative energy in reality??????

You are right that this is an accounting issue. What I really mean is that the energy of the system has decreased, and it is useful to account for that decrease in a particular way, through what I call interaction energy. In the end the total system has total energy which is positive.

Negative energy, eh? I guess this is where the notion of the Theory of Nothingness came from? We are not even close are we? :-)

Is gravity the fundamental field of the vacuum?

Is the interaction (potential) energy the energy required to start (“release”) the ripple from vacuum? I can visualize the mass-energy of the ripple as being the defining level for that specific particle and the potential energy as the energy required to keep pushing it in the positive direction (spacetime).

Are you confusing the vacuum reference potential with a zero datum?

Would quantum tunneling disprove negative energy?

The notion that the universe is a zero-energy object with positive energy in particles and fields and negative energy in overall interaction energy between gravity and fields is not a new one.

Gravity is not the fundamental field of the vacuum; indeed your question doesn’t mean anything, because the vacuum is empty space, but all fields of a universe are to be found in empty space.

“Is the interaction (potential) energy the energy required to start (“release”) the ripple from vacuum.” No. Particles are created through interactions of particles with each other and with fields. But the reason there are particles to start with is that many of them were created in the Big Bang (we don’t know precisely how, because there are many possibilities) and once they start banging into each other and the universe becomes very hot, you can make many more of them. The ones that were left over could coalesce into galaxies, form dark matter halos, stream out as cosmic microwave background, etc. In collisions among these particles, new particles can be created; e.g., fusion inside a star.

I don’t know what you mean by a “vacuum reference potential” and a “zero datum”, so I assume I am not confusing them.

Quantum tunneling is a fact (http://en.wikipedia.org/wiki/Scanning_tunneling_microscope ) and so is negative energy (atoms, nuclei, stars, planets, satellites,…) so there’s nothing to your last question, at least, not as you stated it.

I will try and tie my line of questioning to make my point. I don’t believe there is “empty” space. I believe the exothermic reaction of the Big Bang created space. Expansion of this energy over the same space it created began “clumping” into variable densities and hence the first field. I call it a field because there would have been a very symmetrical pattern over the “space” as the space expanded. As the expansion continued, thermal gradients would begin to become more pronounced to a point where rotations would begin. These rotations, energy flows turning backwards at some radius to define the speed of light, i.e. it is this radius which is the first constraint of this universe. As mentioned in another post, confinement would take the shape of a sphere and the smallest spheres of the “space” are what I call the vacuum potential. Hence, the interaction of adjacent spheres would then create the second field, gravity. The mechanism as I posted before would be a Newton’s cradle in reverse. The repeated collapsing and generation of the spheres would create an attraction force while the thermal cavities between the spheres are the “ripples” we perceive and formulate with the Lagrangian equations.

I know that more and more physicists are giving up on a simple unified theory (zero energy? hologram? strings? … amazing how fast one can lose himself/herself in the math, lol), but I am willing to bet on Einstein’s initial instincts and look for a nice simple solution.

I know you will jump on the use of “exothermic reaction”, and yes I am inferring that this bubble we are living in is one of maybe infinite bubbles that percolate up (down?) and coalesce into the magnificent universe we see. In this context I will ask again for your intuitive opinion, since your knowledge base is so advanced: what is E, the left side of the equation? In one of your responses to my post you described it as temperature, and that we don’t know where and/or why there was such a high temperature at the Big Bang. Could you answer it by assuming it seeped in through a ruptured space-time of another manifold? Could this E be an entity of one temperature? I refuse to use string or particle, and I don’t know if space at one temperature (temperature quanta?) makes any sense either.

I’m afraid I have no idea what you’re trying to suggest.

How can Nature create a microscopic spherical tornado spinning at the speed of light (v = c), from nothing?

E = ( h / 2 x pi ) x ( c / r )

E = h-bar x ( L / T) / L

E ~ 1 / T (?!)

In 1932 Dirac offered a precise definition and derivation of the time-energy uncertainty relation in a relativistic quantum theory of “events”. But a better-known, more widely used formulation of the time-energy uncertainty principle was given in 1945 by L. I. Mandelshtam and I. E. Tamm, as follows. For a quantum system in a non-stationary state ψ and an observable B represented by a self-adjoint operator, the following formula holds:

σ_E × ( σ_B / | d⟨B⟩/dt | ) >= h-bar / 2

where σ_E is the standard deviation of the energy operator in the state ψ, and σ_B stands for the standard deviation of B. Although the second factor on the left-hand side has dimension of time, it is different from the time parameter that enters the Schrödinger equation. It is a lifetime of the state ψ with respect to the observable B. In other words, this is the time after which the expectation value ⟨B⟩ changes appreciably.

Observation: This simple but elegant relationship tells me that there is a unified theory as simple and elegant as Mandelshtam’s and Tamm’s interpretation of the time-energy uncertainty. One interpretation is that at the initial state the energy was almost infinite ( E ~ 1 / T ); however, another interpretation is that time (existence) started with a spark, an almost infinite infusion of energy ( T ~ 1 / E ).

So my question to today’s theoretical physicists: is the time variable in the uncertainty relationship and the time variable in the Schrödinger equation the same?

I am caught a bit off-guard by the small inconsistency with regard to mass. For a single particle, you view “mass” as not including motion-energy, but for a system of particles, you *do* consider motion-energy to be part of the mass. I know it’s just a definition, but it still causes me to stumble a bit.

Ah!!! You are right to be caught off guard — I left out an important statement in the text. The motion energy of the particles INSIDE the system is part of the system’s mass, but the energy from the SYSTEM’S OVERALL MOTION is not to be included. Thank you for pointing out this error of omission — I will fix it immediately.

I expected that was what was meant. But I really should have asked two separate questions. When viewing a single particle in motion, you say that it has “mass” plus “motion-energy” which is not considered to be part of the particle’s mass. But when viewing a system of particles (to be clear, let’s consider ourselves at rest with respect to the center of mass of the system), you say that the internal motion-energy of the particles now *does* count as mass.

If you try to move the system as a whole, yes, you will discover that the system satisfies the equation

E^2 – p^2 c^2 = (M c^2)^2

where E is the total energy of the moving system, p is its momentum, and M, the mass of the system, includes internal motions of the particles that it contains. This is the same equation that a particle of mass M would satisfy.

This is verified in great detail in a few cases… and it is essential in explaining why neutrons are stable inside of nuclei. It also follows from the modern (post-1920s? I’m not sure…) theoretical understanding of what special relativity means and how it works mathematically.
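A minimal sketch of this point (my own illustrative example, not from the discussion): even a system of two massless photons flying in opposite directions has a nonzero, observer-independent mass, because the internal motion-energy counts.

```python
# Mass of a system from M^2 c^4 = E_total^2 - |p_total|^2 c^2.
# Two back-to-back photons: each is massless, but their momenta
# cancel while their energies add, so the system has mass.
# Units: eV, with c set to 1.
import math

E_photon = 1.0e6               # assumed energy of each photon, eV
E_total = 2 * E_photon         # energies add
p_total = E_photon - E_photon  # opposite momenta cancel: zero

M = math.sqrt(E_total**2 - p_total**2)
print(M)   # 2e6 eV: the system's mass, though each photon has none
```

The same arithmetic is why a box of hot gas weighs slightly more than the same box of cold gas: internal motion contributes to the system's mass.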

I don’t wish to appear to be flogging a deceased equine; I quite agree with what you are saying, and I hope you understand that I am using “you” as a group-inclusive of physicists in general and not in the singular sense. I’m not disagreeing; I just find the applied definitions a bit curious.

It is interesting that, when viewing an individual moving particle, you say that it has mass-energy and motion-energy, the latter not counting as mass, by definition. But then if you zoom out the microscope and observe that the particle in question is part of a system of particles (with respect to which we are at rest), then the motion-energy of that same particle *does* count as mass of the system, because it contributes to the inertia of the system.

But, alternatively, the motion-energy of the individual particle likewise contributes to the inertia of the particle itself, hence the original distinction of “rest mass” vs. “relativistic mass”.

I prefer the new point of view, actually — I just need to get used to thinking that way.

There’s a whole story of mathematics that lies behind the preferences that particle physicists take here. E^2 – p^2 c^2 = (m c^2)^2 is a relationship between two things that are observer-dependent and one thing that isn’t. It’s like the equation for a circle: x^2 + y^2 = r^2 , where r is the radius of the circle and x and y are coordinates; x and y depend on how you draw your coordinate axes, but the radius of the circle doesn’t care. So we define mass for a single particle to be an observer-independent quantity.

Next, the goal is to define that quantity which is observer-independent for a *system*. And as Einstein showed in one of his early papers on relativity, there was only one consistent answer — the one I gave you.

If this weren’t true, then for a particle (such as a proton) that later, after further experiments, turns out to be a system of many particles (such as the system of quarks and antiquarks and gluons that make up a proton) there’d be an inconsistency in what you’d mean by its mass if you treated it as a particle versus what you’d mean if you viewed it as a composite system. That wouldn’t make any sense.
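The circle analogy can be checked numerically; here is a small sketch (c = 1, illustrative momenta): E and p change from observer to observer under a boost, but E² − p² does not, just as x and y change with the coordinate axes while x² + y² stays the radius squared.

```python
import math

def boost(E, p, v):
    """View (E, p) from a frame moving with velocity v along the momentum axis (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (E - v * p), gamma * (p - v * E)

m = 1.0
p = 0.5
E = math.sqrt(m**2 + p**2)

for v in (0.0, 0.3, -0.6, 0.9):
    Eb, pb = boost(E, p, v)
    # E and p are different for every observer, but the combination
    # E^2 - p^2 always gives back the same mass:
    assert abs((Eb**2 - pb**2) - m**2) < 1e-9

print("E^2 - p^2 is the same for every observer")
```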

Nice! To be read & enjoyed more than once.

I would be interested in the inversion of your last question: not “Why is the proton stable when it is heavier than the quarks that it contains?” but “Why is the proton heavier than the quarks it contains?” given the naive idea that interaction energy as described seems to be typically negative. (I think I know the rough shape of the answer but you would carve it elegantly.)

I was careful to say that interaction energy can be positive or negative. We’ll see how this plays out when I write the article.

What is the causal mechanism responsible for converting part of the proton and neutron masses to binding energy?

Is it a rule or principle of quantum mechanics that must be "obeyed", for which no further explanation exists?

Is it a given property of the interaction of the electromagnetic and gluon fields?

Do the equations describe it, or explain it?

I wouldn’t say you’re converting the proton and neutron masses to binding energy; notice I did not say the atom’s binding energy comes from the electron and proton masses. I said it comes from interaction energy involving the electron, proton and the electric field.

For a nucleus, it is actually a complicated process, but it does arise from the interaction energy involving quarks and gluons in a not entirely dissimilar way. Because the effect is complicated, our equations are less reliable than for atoms, and it is harder to predict the interaction energy for all nuclei. Nuclear physicists are pretty good at it — but it isn’t simple.

So is it correct to say that the system of interactions among the quark ripples and fields, plus the electromagnetic field, plus the gluon field, results in pumping mass out of the system and converting it to binding energy?

There’s no pumping going on. You’re looking for a deeper explanation of the deep thing itself; the deep thing is that the interaction itself changes the energy of the system. Period. It’s not taking energy from something else.

The mass of the nucleus = (mass of all protons) + (mass of all neutrons) − (binding energy) — exactly — meaning the last quantity is extracted from the first two... am I correct? Binding energy must come from somewhere; interactions are energy users, not energy generators... or else I am totally confused.

Yes, you are confused on this point — interactions are not things that require energy to occur — they do not use energy, the way an engine does. Nor do they produce energy the way a power plant does. The interactions themselves simply occur. Energy (possibly positive, and possibly negative) is present as a result of interactions taking place — but the interaction is not mechanically producing it, or using it.

People often ask where the energy of the big bang came from. I get the feeling from what you're saying that energy isn't anything fundamental... it's not a thing by itself, just a conserved quantity, and that this question isn't meaningful. It is meaningful, I take it, to ask where the laws and fields came from, however... am I making sense?

Hmm.

I’m far from sure we know what the question is yet. Often in physics (and in other areas of scientific research) the key is to figure out what the right question is. Sometimes, by the time you do that, you already see the answer.

So I would say: regarding the right questions to ask about the universe as a whole, I don’t know that anyone yet knows what they are.

P.S. :

So you mean that the interactions REDISTRIBUTE the overall system's mass/energy? ... Are the fields designed to do this? I.e., is it a rule, a principle, a fundamental one?

This is indeed very fundamental to quantum field theory. I’m trying to think about how I can answer this — whether it has an answer that is meaningful, or whether I just have to say: “this is what fields do”.

Remember that I explained that energy is (according to Emmy Noether, http://profmattstrassler.com/2012/03/27/the-mathematician-you-havent-heard-of-but-physicists-have/ ) that quantity which is conserved (i.e. does not change with time) because the laws of nature are constant in time. Operationally, one first writes down equations that describe fields that interact with each other. Second, one asks, using Noether’s theorem, “what is the energy of the system of fields”? One finds there is energy that is associated with the interaction of the fields, though the amount depends in detail on what the fields are doing.

So I think you’re imagining energy comes first, and then you put fields in it. But no, you start with fields and the laws which govern their interactions, and then you ask: what is the conserved quantity associated with this system of fields and interactions? Not the other way round.
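A toy mechanical analogue of this procedure may help (it is not field theory, just an illustration with made-up spring constants): write down the equations of motion for two coupled oscillators first, then ask what quantity they conserve. The conserved quantity necessarily includes a term belonging to the interaction itself.

```python
# Two unit-mass oscillators coupled by a spring: the equations of motion
#   x1'' = -k*x1 - k_int*(x1 - x2),   x2'' = -k*x2 - k_int*(x2 - x1)
# have no explicit time dependence, so (a la Noether) they conserve an energy —
# and that energy contains an interaction piece. Parameters are illustrative.
k, k_int = 1.0, 0.4
x1, x2, v1, v2 = 1.0, -0.5, 0.0, 0.3
dt = 1e-3

def conserved_energy(x1, x2, v1, v2):
    kinetic = 0.5 * (v1**2 + v2**2)
    self_pot = 0.5 * k * (x1**2 + x2**2)
    interaction = 0.5 * k_int * (x1 - x2)**2   # energy of the interaction itself
    return kinetic + self_pot + interaction

def accel(x1, x2):
    return -k*x1 - k_int*(x1 - x2), -k*x2 - k_int*(x2 - x1)

E0 = conserved_energy(x1, x2, v1, v2)
for _ in range(100_000):                       # velocity-Verlet integration
    a1, a2 = accel(x1, x2)
    v1 += 0.5*dt*a1; v2 += 0.5*dt*a2
    x1 += dt*v1;     x2 += dt*v2
    a1, a2 = accel(x1, x2)
    v1 += 0.5*dt*a1; v2 += 0.5*dt*a2

# Kinetic and potential pieces each change with time; only the sum that
# includes the interaction term stays constant.
assert abs(conserved_energy(x1, x2, v1, v2) - E0) < 1e-5
print("total energy, including the interaction energy, is conserved")
```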

P.S. 2 :

You cannot say that B is a result of A but that B is not produced by A, unless A is designed so that its mere existence is always accompanied by B.

I’m trying to address what I think your confusion is; I might be misinterpreting it.

If A and B are sufficiently intertwined it can become impossible to state, in words, how they are related. In equations this would be very easy.

I do not agree that "the Moon and Earth cannot fly apart, and instead remain trapped". There is one additional piece of energy you didn't consider — the rotational energy of the Earth and Moon. The Earth's rotation pushes the Moon away by a few inches per century, and in a couple of billion years the Earth will lose the Moon.

However, it is really difficult to imagine how this tidal force pulls the objects apart.

This is something that I left out, yes. I should probably supplement the article to explain it. As you say, it is tricky.

Actually, I think your statement isn’t correct, or at least there is serious debate.

http://www.eurekalert.org/pub_releases/1996-07/UoA-TRTM-050796.php

“Given its present-day rate of retreat, the moon eventually would reach synchronous orbit with Earth in about 15 billion years, Zakharian said in an interview. In synchronous orbit, the moon and Earth would orbit together as planet and satellite in fixed position, locked face-to-face, about 560,000 kilometers (336,000 miles) apart. The moon now is about 384,000 kilometers (240,000 miles) away.”

There are also statements in the literature that if the sun warms and the earth’s oceans boil away, the retreating due to tides will slow.

But there seems to be more debate about the precise rates of tidal losses than I realized. Something to learn more about. I also hadn’t realized that there was actual data that gave information on tides and the moon’s location over geologic time scales.

Also, your numbers are wrong. At a few inches per century, it would be seven million years to retreat a mile (about 70,000 inches) and that would mean seven billion years would mean a retreat of only a thousand miles (out of 250,000 already).

Instead the correct number is 3.8 centimeters (about 1.5 inches) per year.

http://harvey-craft.suite101.com/the-moon-is-receding-from-the-earth-a200780

http://eclipse.gsfc.nasa.gov/SEhelp/ApolloLaser.html

That means that in ten billion years the moon would retreat 100,000 miles — still not enough to escape the earth. And once the earth and moon are tidally locked, no more tides and no more orbit changes. Of course the sun will probably have expanded, and swallowed the earth and moon, well before then.
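The back-of-the-envelope numbers a few lines up are easy to check; a quick sketch (reading "a few inches per century" as 1 inch per century):

```python
# At ~1 inch per century, how long does a one-mile retreat take,
# and how far does the moon get in seven billion years?
inches_per_mile = 5280 * 12               # 63,360 — the "about 70,000" above
years_per_mile = inches_per_mile * 100    # one century per inch
miles_in_7_Gyr = 7e9 / years_per_mile

print(years_per_mile)      # ~6.3 million years per mile ("seven million" above)
print(miles_in_7_Gyr)      # ~1100 miles ("only a thousand miles" above)
```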

THANKS MATT: I very much accept that... I mean, this is the way our world is designed... this is the way everything is connected to everything; sometimes we have to take it as given.

It is a great honor to have a dialogue with such a nice expert as your good self.

Hi Matt !

At the genesis of the universe, when all the fundamental fields were concentrated into a singularity, then presumably the negative energy would have been of infinite intensity. As the universe came into being and the fundamental fields expanded, the negative energy within the fields would have dissipated to a point at which it was replaced by mass and motion energy, allowing the formation of stable structures. Would this be a fair summation?

Well — first of all, we don’t know about the very earliest periods of the universe. We don’t know there was a singularity (and indeed, since a singularity by definition is a place where our equations break down, there’s no reason to think the equations we have right now actually work there.) So if I tried to answer your questions I’d be speculating wildly. Not that this stops theoretical cosmologists — it’s their job to speculate. But we do not know why the universe became hot and populated with ordinary matter (and presumably dark matter.)

Also, DO NOT visualize the Big Bang as an explosion from a point. That is wrong. The Big Bang is an expansion of space, not an explosion into pre-existing space. But what it looked like when it began to expand was not a point — it may have been infinite to start with, or it may have been a region within a much more complicated pre-existing space-time, or its features may not have been interpretable as space at all. We certainly do not know the universe’s extreme past.

Now, is it possible that one day we may discover that fields are ONLY our mathematical representation of what we observe (with a nice match), but that the MOST fundamental ingredient of the world is something we never imagined? Related to this: can our knowledge, as of NOW, confirm that fields ARE the most primary ingredient, with ultimate, final confidence?

Absolutely it is possible; it is even likely. Science does not provide final confidence; it provides tools for predictions and for consistent thinking. Those tools are always potentially subject to update with new information. The only thing we know is that those updates will preserve successful predictions of the past, as Einstein’s revisions of the laws of motion cleverly maintained all of Newton’s predictions.

Hi Matt,

Are the disturbances in the field (well behaved or not) changes (fluctuations) in the value of the field? The following questions assume the answer to this one to be “Yes”. But even if it’s “No” you might still be able to see what’s confusing in my mind.

Is the quantum limitation a property of the fields (i.e. the change in the value of the field cannot be smaller than a quantum)?

Does the energy of a general disturbance in the field (not a particle) have the same components as the particles (mass energy and motion energy)?

Does a field have energy?

Can two fields interact (or have a relationship) in a different way than the one described in the article (i.e. a disturbance in one field generates a disturbance in the second field)? Is there energy just because two fields that can interact have large values (in the same region, I suspect) without the need of any disturbance?

So — delayed answer:

Disturbances in the field do involve changes in the value of the field, yes. They aren't changes in the average value of the field over all of space, but rather localized changes. (Caution: this is quantum field theory, so just as we can't simultaneously know the position and velocity of a particle, we cannot simultaneously know the value of a field and how it is changing...)

The statement that ripples in fields are quantized, however, is NOT a statement that a change in the value of the field cannot be smaller than a quantum. The value of the field can change continuously. It is the statement that a RIPPLE (an up-and-down change that resembles a ripple on a pond, in that the average change in the value of the field is *zero*) cannot have arbitrarily small height (i.e. 'amplitude').

For a given field, its ripples (i.e. its particles) all have the same mass. (Small caveat if the particles are very short-lived, but let’s ignore that for the moment.) The electron is a ripple in the electron field; all electrons have the same mass. General disturbances can have any mass. You can, if you want, think of them as having mass-energy and motion-energy. It’s not as useful as for particles, because (unlike a particle) these disturbances tend to fall apart right away (even faster than most unstable particles do.) So they don’t tend to move very far, and if they bounce off of something they will typically emerge with a changed mass — very different from electrons, which can travel macroscopic distances and retain their mass even if they bounce off of something.

Fields do generally have energy, yes; if they are changing over time or over space, they always do.

Yes, it is possible for two fields that are non-zero but not disturbed to have energy due to their interaction. An example would be if there are two types of Higgs fields in nature rather than one; the average values of the two Higgs fields in nature will be determined by the requirement that their energy of interaction with each other and with themselves be minimized.
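A purely illustrative sketch of that last point, with an invented two-field potential (all coefficients are made up; this is in the spirit of the two-Higgs-fields example, not a real model): each field has a self-interaction, plus a cross term coupling the two, and the fields' average values are set by jointly minimizing the total, interaction term included.

```python
# Toy energy density for two non-zero, undisturbed fields with average
# values v1 and v2. Coefficients are invented for illustration only.
def V(v1, v2, coupling):
    return (-v1**2 + 0.5 * v1**4) + (-v2**2 + 0.5 * v2**4) + coupling * v1**2 * v2**2

def minimizing_values(coupling, steps=400):
    grid = [2.0 * i / steps for i in range(steps + 1)]   # scan v1, v2 in [0, 2]
    return min((V(a, b, coupling), a, b) for a in grid for b in grid)

_, v1_free, _ = minimizing_values(coupling=0.0)   # no interaction: minimum at v1 = 1
_, v1_int,  _ = minimizing_values(coupling=0.3)   # interaction shifts the minimum

print(v1_free, v1_int)
assert v1_int < v1_free   # the interaction energy changes where the fields settle
```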

About quantum units being dependent upon the “grid lines” of the field in which the are derived, is it conceivable that at an earlier stage of the universe’s evolution, these grid lines and quantum units were of a grander scale or at least of a different scale than known to us today experimentally? Consider time dilation for a mass, defined by spacetime’s absence or field knotting, traveling at near light speed relative to our frame of reference. Although there may or may not be a quantal unit of time, time’s arrow is a relativistic constant within it’s frame of reference. Time’s arrow accomodates the frame of reference. Furthermore, there is a fractal nature to the expansion of the universe – as well as a fluid nature with boundary partitions – some fields extend WITHOUT a time component – accounting for quantal entanglement. Some mysteries remain eternal! My point: Is it conceivable that the scale of a quantal unit is dependent upon the scale of the field’s “grid lines” from which it is derived, so that the fields from which particles are formed have evolved and rescaled simultaneously with that of cosmic evolution. My answer to myself: “Yes.”

Sorry, I have no idea what you are talking about. Please define “quantum units” and “grid lines”; most fields could not have “grid lines”, by any definition I can think of, so I don’t know what you’re talking about. And “quantum units” is a non-standard and ambiguous term.

Of all isotopes, iron-56 has the lowest mass per nucleon. With 8.8 MeV binding energy per nucleon, iron-56 is one of the most tightly bound nuclei. How would you explain this stability? What is it about the nuclear geometry of fields allowable by this number of protons and neutrons to account for this energy of mass defect? I have been intrigued by knot theory within-between fields so I have been exploring for representative phenomena.

The details of nuclear physics are well understood but very complicated, unfortunately; explaining the binding energies of nuclei is not something that can be done with a short answer.

See for example http://mightylib.mit.edu/Course%20Materials/22.101/Fall%202004/Notes/Part4.pdf

Hi Matt,

Have I asked my questions in a wrong way? I was wondering what is a ripple in a field. Is that an oscillation of the value (the property of the field you said can be measured everywhere) of the field?

Thank you,

Călin

You correct Pavel's factor-of-100 error using this site, but the site itself is also out by a factor of 100, I believe, with this statement:

‘Laser pulses are aimed at the reflector from Earth and bounce quickly back at 3 million meters per second – that’s about 186,000 miles per second, so the round trip takes less than three seconds’

http://harvey-craft.suite101.com/the-moon-is-receding-from-the-earth-a200780#ixzz1tkqyXafG

Also, it says there:

“As an interesting sidelight, there will be no full moon in February of 2010. There will be fewer in the short month as time goes by because the Moon will take longer to orbit the Earth as it spirals away”

Really?

The slow rate of the moon's drifting away is more than compensated by the slowing rotation of the earth, making lunar months longer in scientific seconds, but shorter in days.

It would seem that if we have enough patience, there will never be a February without a full moon.

duncan

That’s why I gave two sites; any given site can either be wrong or have a typo. Of course if a typo gets copied things are correlated. But still — you’re not implying that the 3.8 cm/year number was wrong, are you?

Instead you just mean that that website should say 300 million meters per second. 186,000 miles per second (and three seconds for the round trip) is correct. Looks like a simple typo there.

Your point about February is more substantive and seems correct, because isn’t the end result of tidal friction (except for one subtlety) that the earth day and lunar month become equal? So the day becomes very long, the moon maintains a fixed position in the sky, and the moon is new and full every day; some parts of the earth see the moon all the time and some never see it. And I’m not even sure we can talk about February; how many days will there be in a year, by then?

The subtlety is that the earth’s oceans may boil away, due to the sun becoming hotter, long before that happens, vastly reducing tidal friction and pushing this end-point off for I have no idea how long, but certainly longer than the earth is likely to survive the sun’s expansion as it reaches old age.

Prof. Strassler,

Just to be clear: we are restricting ourselves to a flat Minkowski spacetime, are we not? The whole concept of “energy” and its conservation becomes rather problematic in the curved spacetime of general relativity unless some universal Killing field is imposed (which violates the general covariance requirement of general relativity). When both time and space can “bend” depending on spacetime’s contents and on the motion of mass-energy-stress through it, the symmetries required for a meaningful definition of energy as a conserved quantity aren’t present.

What you say is true and not true. We are approximating the curved space picture by assuming only the time-time component of the metric is curved, and representing that as the gravitational field.

Any effects that go beyond this approximation are minuscule.

Physicists always make useful approximations in order to capture the physical phenomena to the extent possible.

You are rejecting the approximation in which energy can be used because it isn’t exactly accurate. You say we should use Einstein’s general relativity to do this correctly. And you talk about the curvature of spacetime instead.

But in that case, shouldn’t you worry about the fact that general relativity is also wrong, because it doesn’t account for the fact that the earth and moon’s particles should be described using quantum mechanics?

And in fact, that’s not enough, because spacetime itself is quantum mechanical at very short scales.

In other words, your picture is incomplete too.

One of the hardest things to learn in physics is when subtle effects matter and when they don’t. You’re so focused on getting things exactly right that (a) you have forgotten that you don’t have them exactly right either, and (b) there is a physics point which you are making much more confusing than it needs to be in the process.

First let’s make sure we understand energy and why things are bound together; then we can try to understand how, in some circumstances (but not this one), general relativity forces us to account for the fact that this is not a sufficiently good approximation.

According to general relativity, the Earth and the Moon are not feeling any “force” of gravity. They are both traveling in geodesic orbits around their common center of mass — i.e., they are in free-fall along geodesic paths that are curved due to the curvature of spacetime: both space and time are curved as a result of the presence of their mass-energy-stress within the spacetime. In GR, a body moves along a geodesic (not along a straight line) unless affected by an outside force. There is no Newtonian “gravitational field”, just the dynamic metric of spacetime. So your earth-moon diagram is roughly correct, according to GR, if you substitute “4-dimensional manifold with a semi-Riemannian metric that varies according to Einstein’s equation” for “gravitational field”. But that is a quite significant detail, and very different from how you view the problem, and approach it mathematically, in Minkowski spacetime.

Most of what you say here is wrong… you have Einstein correct, but you have not understood that what I said is also consistent with Einstein.

First, I did not say the words “gravitational force” in my article. Nor did I say “Newtonian field”. You put words in my mouth — so why are you criticizing me for using them?

You are right there is no Newtonian gravitational field — however, you are wrong beyond that point. The metric IS associated with the Einsteinian gravitational fields — and in particular, in situations where you have two slowly moving, weakly gravitating objects, the only component of the metric which is significantly different from flat space is the time-time component, and the only components of the Einsteinian gravitational fields which are significantly different from zero are those that are derived from the time-time component of the metric. See Weinberg’s book on the weak-gravity limit. (You are perhaps not familiar with the field language, but it works just fine.)

The approximation I am making is that the other components of the gravitational field are very small — an approximation whose limitations can be measured with precise techniques, but which is accurate enough that everything I said about binding, and binding energy, gives the correct result.

Just as we should not waste our time worrying about the quantum mechanical corrections to the earth-moon system, we should not worry about the components of the Einsteinian gravitational fields that are so small that they do not affect the dynamics of the earth-moon system.

A slight correction: the force that the Earth and Moon do feel is a tidal force. Because the curvature of spacetime in which they travel is not uniform, the paths that some parts of these bodies travel are slightly different from the paths that their neighboring regions are trying to travel. This tends to pull them apart. But because they are semi-rigid bodies, these shear forces are of course resisted by the electromagnetic forces holding them together, so the motion of the body as a whole is affected. And that is why the Earth’s rotation is slowing down and the moon’s orbital velocity speeding up (causing its orbit to expand) — due to the tidal forces induced by differential curvature of the spacetime which they inhabit.

This is fine, but it is a lot like worrying about the fine points of grammar when you’re trying to communicate to people that they need to abandon ship.

Because spacetime in GR is curved, there is no general definition of parallel vectors, nor parallel transport. In most spacetimes in general relativity, there can be no global family of inertial observers. That is, spacetime in GR is Lorentz _covariant_ only locally, not globally. Although energy at a point (or in a sufficiently local region where spacetime curvature is negligible) can be defined, in general, an observer cannot know the energy at an arbitrary distant point. And if that local energy is unbounded from below, or sufficiently negative, spacetime itself becomes unstable.

So I was surprised by your use of the Earth-Moon/gravitational system to illustrate a rather semi-classical mechanics view of energy. You seem to have crossed The Line That Should Not Be Crossed – conflating quantum mechanics based on Hamiltonians and Minkowski spacetime with gravitation based on Einstein’s equations and curved spacetime. As with your goal of correcting the common misrepresentation of “particles”, shouldn’t we be careful to use the most accurate, up-to-date description that we have – currently, still Einstein’s General Relativity — while being very explicit as to its limitations?

Oh, come on. This is ridiculous. Please stop talking to me as though I’m an idiot.

The most up-to-date description of gravity would treat the earth and the moon as quantum mechanical systems. What’s your argument for not doing that?

Are you seriously suggesting that comparing hydrogen to the earth-moon system is so completely wrong that absolutely nothing useful can be learned by doing so? And that it would be better to leave people so confused about hydrogen (AND the earth-moon system) that they cannot understand why structure forms in the universe?

If so, I advise you to run your own website, and explain things your own way.

Hi Matt,

I think I know why you’re not answering my questions. I sincerely apologize for my (childish) behavior.

Călin

Calin — the reason I haven’t answered is that your first question was tough to answer without a long reply, and I set it aside. Then I forgot about the restatement. Let me try to get back to it. I’ve had a lot of comments (and a lot of work too) in the last few days.

p.s. Now I’ve answered it.

(1) So the moon moves 3.8 cm per year away, or 3.8 meters per century, which is about 12 meters of extra traveling distance and 12 milliseconds per century for its orbit, as the moon travels at about 1000 meters/sec.

But the solar day becomes 1.7 ms longer every century, and so a month will last about 51 ms longer per century. If you put the two figures together, then a lunar month should be about 39 ms shorter per century (51 − 12).

For the moon always to be visible in February, it would have to lose 37 hours so that it would be only 28 days long. As there are about 3,400,000 sets of 39 ms in 37 hours, it should take 340,000,000 years until you can be assured of a visible moon in February, with the day being 25.6 hours and 342 days in a year.

(2)

your link says

“Given its present-day rate of retreat, the moon eventually would reach synchronous orbit with Earth in about 15 billion years, Zakharian said in an interview. In synchronous orbit, the moon and Earth would orbit together as planet and satellite in fixed position, locked face-to-face, about 560,000 kilometers (336,000 miles) apart. The moon now is about 384,000 kilometers (240,000 miles) away.”

15 billion times 3.8 cm = 570,000 km, but it only has to travel another 176,000 km to get into synchronous orbit, which is about 4.5 billion years.

(3) In 15 billion years the earth should take another 71 hours to rotate daily (at 1.7 ms longer every century), for a total of 95 hours in a day when the moon reaches synchronous orbit with Earth. Currently the moon takes 708 hours to orbit the earth. Is that what happens as the moon gets into synchronous orbit — does it get a rapid increase in speed?
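Taking the comment's own inputs at face value, the arithmetic in points (1) and (2) checks out; a quick sketch:

```python
# (1) How long until a lunar month fits inside a 28-day February,
#     at the claimed shortening of 39 ms per century?
ms_per_century = 39
ms_to_lose = 37 * 3600 * 1000             # 37 hours, in milliseconds
years_point_1 = (ms_to_lose / ms_per_century) * 100
print(years_point_1 / 1e6)                # ~342 million years, as estimated

# (2) Time to retreat the remaining 176,000 km at 3.8 cm per year:
years_point_2 = 176_000 * 1000 * 100 / 3.8   # km -> cm, divided by cm/yr
print(years_point_2 / 1e9)                # ~4.6 billion years ("about 4.5")
```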

If it’s not too late to follow up on this discussion of tidal locking, let me mention that it was the topic of a summer project I did as an undergraduate, more years ago than I care to admit. My job was just to make educational animations, but I had to learn something about the related physics as well (at the Newtonian level, of course). My employer seems to have abandoned his Web site before adding this project, so I went ahead and resurrected it here. There may well be errors, but I hope the animations and accompanying discussion and references will prove educational.

In discussions like (1) and (3), we need to carefully distinguish sidereal time and synodic time. A sidereal month is the amount of time it takes for the moon to orbit once around the Earth, with respect to the background stars, currently 656 hours. A synodic (lunar) month is the amount of time between successive new moons from the perspective of an observer on the earth, currently 708 hours. Similarly, a synodic (solar) day is the time from one sunrise to the next, while a sidereal day is the amount of time it takes for the earth to rotate about its axis, with respect to the background stars. (Of course there are further caveats, refinements, and so on, which are unimportant here.)

In the synchronous orbit expected in the far future, a sidereal day will last as long as a sidereal month (the number I have in that old project is about 1130 hours), the same point on the Moon will always face the Earth (this is already true today), and the same point on the Earth will always face the Moon. Whether or not an observer sees the moon in February or any other month would depend on where that observer were located on the earth: part of the planet would always see the moon, the rest would never see it.

Regarding (2), as the moon moves farther away from the earth, the rate of this motion away from the earth decreases. And (re: 3) as the radius (semi-major axis) of the Moon’s orbit increases, its angular velocity also decreases, as described by Kepler’s third law.
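The sidereal/synodic distinction above has a simple quantitative form worth noting: in one synodic month the moon must also make up the angle the earth has swept around the sun, so 1/T_synodic = 1/T_sidereal − 1/T_year. A quick check with today's numbers:

```python
# Relating the two kinds of month quoted above (656 vs. 708 hours).
T_sidereal = 27.32        # days — about 656 hours
T_year = 365.25           # days

T_synodic = 1.0 / (1.0 / T_sidereal - 1.0 / T_year)
print(T_synodic * 24)     # ~708.7 hours, matching the 708 quoted above
assert abs(T_synodic * 24 - 708) < 3
```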

Hi Matt

Let’s say I superglue two strong magnets together with the south poles touching. This object has some positive interaction energy in addition to just the masses of the original magnets and the glue; so, if I take a very accurate scale, and weigh this object against a similar object but with a south pole glued to the north pole, the first object would actually be heavier?

Yes.
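To get a feel for the size of the effect (using an invented but generous figure of 1 joule for the difference in interaction energy between the two glued configurations), the mass difference follows from Δm = ΔE/c²:

```python
# Mass difference from an interaction-energy difference, Delta_m = Delta_E / c^2.
c = 299_792_458.0          # speed of light, m/s
delta_E = 1.0              # joules — an illustrative, made-up figure
delta_m = delta_E / c**2

print(delta_m)             # ~1.1e-17 kg: real, but far below any scale's precision
```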

David Schaich

Liked your website. Does it have the actual mathematical equations, though?

Hey Matt,

A physics newbie here, trying to wrap my mind around interaction (“potential”) energy and total system energy. I think I understand the gist of what you say in this (great) article about the relationship between mass, interaction energy, and inter-system relationships. I do get that interaction energy is essentially what defines the energetic boundary of a system that keeps it from fragmenting into separate, basically isolated systems (is that the right way of saying it?). But I am still somewhat confused, as I try to explain below.

This post gets a bit longer than I meant it to, but I’m not sure how to get my thoughts across any more succinctly, so my apologies for that. =P

In my current reference texts, I am being introduced to the idea of attractive forces as having negative interaction energy, such that two isolated systems starting from rest and given even a small attractive force between them, over an infinite distance, will show a net kinetic-interactive energy sum of 0, even as the interaction energy decreases infinitely, and the kinetic energy increases infinitely. As the separation distance approaches infinity, the interaction energy approaches zero (as in the case of gravity). And by further reasoning, as the separation distance approaches zero, then the potential energy approaches negative infinity. But my mind trips over the accounting of it, I guess you could say? I am just not quite clear on why the interaction energy is negative, even as I understand the reasoning that leads to the conclusion that the interaction and kinetic energy, summed, must = 0; since both isolated systems started with only their rest energies.

I find it mentally far more clear to restate the situation. When we go from two isolated systems to a compound, two-object system, we are “injecting” a new factor, the separation distance, into the behaviour of the system. And wherever there is separation distance, and a force that can act over that distance, there is interaction energy, at least as far as I understand it. So the interaction energy of a system with an infinite separation distance, but some acting attractive force (however small), is in fact *infinite* by this reasoning (even as it is somehow -> 0, which is in fact an increase from infinitely more negative states…). And as the separation distance approaches 0, so does the interaction energy (just as when one approaches infinity, so does the other). No separation distance, no distance for any attractive force to perform internal work over.

So if you have two massive particles that start at rest, *with a separation distance between them*, and assume these particles have no electric/contact forces (only gravity is present), then you basically get a gravitational oscillator, where – relative to one particle – the second particle oscillates through the first along one linear path, up to a maximum distance equalling the initial separation distance. The kinetic energy = 0 at either extreme of the oscillation, since the attractive force has been working against the motion the entire time the particle moves away from the other; while the interaction energy = 0 at the single instant where both particles occupy the same point, since the particle has been accelerating that entire time (kinetic energy = total energy – rest energy), and there is now no distance between the two particles whatsoever. In no instant in this system is the interaction energy ever negative, and the sum of the kinetic and interaction energies of the system is always a constant.
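The oscillator described here can be simulated directly. A toy sketch, assuming a constant attractive force rather than Newton’s 1/r² (to sidestep the singularity at zero separation), and adopting this comment’s convention that U = F·|x| is zero at zero separation:

```python
# Toy "gravitational oscillator": one particle moving relative to the other
# under a constant attractive force F; U(x) = F*|x| is zero at contact and
# never negative, matching the convention described above.
F = 1.0            # magnitude of the attractive force (arbitrary units)
m = 1.0            # mass of the moving particle
x, v = 5.0, 0.0    # released from rest at separation x = 5
dt = 1e-4

E0 = F * abs(x) + 0.5 * m * v**2   # total (interaction + kinetic) energy
max_sep = 0.0
for _ in range(200_000):           # roughly 1.6 oscillation periods
    a = -F / m if x > 0 else F / m # force always points toward x = 0
    v += a * dt                    # semi-implicit Euler step
    x += v * dt
    max_sep = max(max_sep, abs(x))
    E = F * abs(x) + 0.5 * m * v**2
    assert abs(E - E0) < 1e-2      # U + KE stays (numerically) constant

assert abs(max_sep - 5.0) < 0.01   # swings out to the initial separation
```

The particle oscillates through x = 0 out to its starting distance on each side, with the kinetic and interaction energies trading off while their sum stays fixed, just as the comment describes.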

Indeed, as far as I can tell, the only reason the interaction energy becomes negative at all is that we define the ‘zero point’ of potential energy to be some point where the separation distance IS NOT zero. And I’m not clear as to why this is a useful assumption.

Anyway, having said all that, I am unclear as to how to reconcile this “gravitational oscillator” perspective – where separation distance and interaction energy are both always non-negative, and increase with each other – and the case in which interaction energy and kinetic energy start at 0 from two isolated systems, and then the former decreases without bound as the latter increases without bound. (It is worth noting that the ‘escape energy’ of this sort of system, from the perspective I describe, would be a point at which the potential energy suddenly starts dropping to zero as the attractive forces move ‘out of range’, and the systems become effectively isolated, even as the particle we deem to be ‘moving’ relative to the other retains a nonzero kinetic energy. At least, that’s as far as I can reason it.)
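For comparison, in the standard convention (zero at infinite separation) “escape” needs no out-of-range cutoff: a body is unbound exactly when its total energy KE + U is at least zero. A quick check with rough Earth values:

```python
import math

G = 6.674e-11        # Newton's constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # Earth's mass, kg
R_earth = 6.371e6    # Earth's mean radius, m

def is_bound(v, r):
    """Total energy per kg: 0.5*v^2 - G*M/r; negative means bound."""
    return 0.5 * v**2 - G * M_earth / r < 0

v_escape = math.sqrt(2 * G * M_earth / R_earth)   # speed at which total energy hits 0
print(round(v_escape))     # → 11186 (m/s; the familiar ~11.2 km/s)

assert is_bound(7.9e3, R_earth)        # low-orbit speed: still bound
assert not is_bound(11.3e3, R_earth)   # above escape speed: unbound
```

No force ever “goes out of range” here; the sign of the conserved total energy alone decides whether the body comes back.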

Hopefully this reasoning isn’t just a giant jumble, and any insight you could perhaps provide would be greatly appreciated. These relationships are just not quite coming together in my head in the way they have been presented to me so far.

Thanks again,

– Chris

Chris — your confusions are very natural and common, and your reasoning (I admit I didn’t go through every detail) seems sound. As you say, it is an option, when dealing with energy in classical freshman-level physics, to set the zero of energy wherever we like, and either perspective you outline is allowed.

Experience, however, will teach you that one choice of where to set the zero of energy is far more convenient than the others. For instance, suppose that you set the zero of energy so that comet #1 has zero interaction (i.e. “potential”) energy at its closest approach to the sun and positive energy further away. Well, now if comet #2 has an orbit that brings it closer to the sun, it will have negative potential energy. So what you’d have to do to describe a whole solar system using only positive interaction energy would be to find the comet with the closest approach to the sun, and set the zero of potential energy there. Or even more appropriately, put the zero of potential energy at the dead center of the sun (where it is finite because the sun is a spread-out sphere). But you see: to describe this system’s energy you need to know many of its details. This is not convenient, and it is very system-dependent; add one more comet, or make the sun a little more or less dense because of its evolution over time, and you may wish you’d set the point of zero potential energy differently.

In contrast, if you always set the zero of potential energy at infinite separation, this is system-independent, and always works (as long as you’re not dealing with a significant fraction of the universe, or something else that renders ordinary classical physics insufficient.) You don’t need to know anything about what’s in the system to do this. And that’s why, with experience, you’ll see this is by far the best choice. The alternatives work in specific situations, but they don’t lead to a useful and general theoretical picture.
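The system-independence can be seen in one line of algebra: adding a constant C to the interaction energy cancels out of every energy *difference*, and only differences are ever measured. A sketch with made-up numbers:

```python
G = 6.674e-11
M_sun = 1.989e30   # kg
m = 1.0e3          # kg; some small body (hypothetical)

def U_infinity(r):
    """Zero of interaction energy at infinite separation (standard choice)."""
    return -G * M_sun * m / r

def U_shifted(r, C):
    """Same physics, zero point moved by an arbitrary constant C."""
    return U_infinity(r) + C

r1, r2 = 5.0e10, 1.5e11      # two orbital radii, m
C = 7.7e12                   # any offset at all
dU_standard = U_infinity(r2) - U_infinity(r1)
dU_shifted = U_shifted(r2, C) - U_shifted(r1, C)
assert abs(dU_standard - dU_shifted) < 1e-3 * abs(dU_standard)
```

The offset C drops out of the difference, so any zero point yields the same forces and the same bookkeeping; infinity is simply the one choice that requires no knowledge of the system.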

Thanks Matt; I almost hope you didn’t go through it all, rambling as it was! Anyway, I was increasingly sure that negative interaction energy had to be a conscious, yet arbitrary, choice on the part of the physics community for some reason of conceptual simplicity; but for whatever reason my text (“Matter and Interactions”, for the curious, which has worked very well for me so far short of this exception) just didn’t go into the reasoning that led to that choice in any detail, and I was having difficulty finding other good articles explaining the justification. Your explanation helps clarify that, so thank you very much.

I’m not sure I completely grasp your points as to your example of the solar system, though:

I do get what you are saying about the interaction energy becoming negative if, in this situation, we set the zero point at any arbitrary distance from the center of the sun, with respect to comet orbits or anything else. The same sort of approach may be applied to objects on Earth’s surface with respect to the surface of the Earth, below which they cannot move, even as the force of gravity still pulls upon them from Earth’s center of mass. So I think I get that.

If I understand correctly, you’re saying that – assuming a positive interaction energy perspective – if we place the zero of the solar system’s interaction energy at the dead center of the Sun, this basically makes the most physical sense (and this is basically the approach I outlined above re: the gravitational oscillator), since that is basically the point to which the gravitational force is always trying to pull all solar objects.

You say that we need to know many details of this system; can you give me one or two examples? I can see that we need to know the Sun’s radius, so that we know the closest any object can ever get to the zero-point at the center of the sun. But I’m not clear what other factors would be critically important to bear in mind. One thought that occurs is that while this approach is simple when we assume the Sun is stationary, it becomes far more complicated if our frame of reference sees the sun moving around with respect to us, which basically means the zero-point of the solar interaction energy is wandering around too…

You also say that adding one more comet, or changing the density of the Sun, can affect the way we interpret the interaction energy for objects within this system, based on this approach. I am presuming this is because such changes would influence the center of mass of the solar system, and thus the point to which objects are trying to gravitate, and so ‘moves’ our zero-point. Is this reasoning correct?

Thanks again for humouring my questions; your assistance in understanding these ideas is very much appreciated. =)

No, it doesn’t change the center of mass much — that wasn’t my point. The changes in the solar system will assure that certain objects will have negative interaction energy despite your best efforts to avoid it.

For example: set the energy for an object located at the center of the sun as the zero of energy. Now imagine a comet falls into the sun, making the sun’s mass larger. Well, the interaction energy for an object at the center of the sun just decreased. So in this process, the energy at the center of the sun has now gone negative. This is not very convenient.

Or suppose you put the interaction energy of Mercury and the Sun to be zero. Now in comes a comet; it passes Mercury and goes closer to the sun. Now its interaction energy is negative; do you want to redefine where you put the zero just because a new comet came closer than Mercury?

Best to put the zero of energy at infinity, and not be affected by these details at all.
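The comet example in this reply can be made concrete. Rough textbook values for the Sun and Mercury below; the sungrazing comet’s mass and distance are hypothetical:

```python
G = 6.674e-11
M_sun = 1.989e30   # kg

def U(m, r):
    """Interaction energy with the zero set at infinite separation."""
    return -G * M_sun * m / r

U_mercury = U(3.30e23, 5.79e10)   # Mercury's mass (kg), mean orbital radius (m)
U_comet   = U(1.0e13,  1.0e10)    # hypothetical comet well inside Mercury's orbit

# Both are negative; the newcomer is simply *more* negative per kilogram,
# and nothing about the convention needs to be redefined.
assert U_mercury < 0 and U_comet < 0
assert U_comet / 1.0e13 < U_mercury / 3.30e23
```

With the zero at infinity, adding bodies to the system never forces a re-zeroing; closer just means more negative.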

Hey Matt, thanks again for the clarifications. I think I’ve about got the idea now, between my readings and your responses to my questions. So, I wanted to ask, for purposes of clarification;

Does it make sense to define the “interaction energy” of a multiparticle system, in terms of the individual rest energies, as essentially – in attractive systems – the amount of rest energy that the two particles, when interacting, are able to convert into kinetic energy and, in some form, eject from the interacting-state system (as in the case of a proton-electron pair that ejects a photon/quanta of energy)?

Or, in the case of repulsive forces, the amount of kinetic energy that a particle can, by interaction with another particle, convert into additional rest energy within the interacting-state system?

If one assumes that all physical particles are always trying to enter states with a lower total rest energy (for whatever reasons I don’t yet grasp), as I understand is essentially the case from my limited experience with the basics of chemistry, then that seems to make sense. That said, I’m wondering if I am connecting dots that aren’t there.

Is this an accurate conceptual interpretation of interaction energy, or am I on the wrong track?

I don’t think I’m understanding the way you’re thinking. What do you mean by “individual rest energies”, or by “convert into kinetic energy”? In an atom, the rest energies of the particles are just E_rest = mc^2 for each elementary particle.

The interaction energy of a system of elementary particles would be the Mc^2 for the whole multi-particle system, minus the sum of the mc^2 for each elementary particle, minus the kinetic energies of each particle. That is the simplest way to say it.
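For a concrete instance of this bookkeeping, take the hydrogen atom (rest energies in eV; the 13.6 eV binding energy, and the equal value for the electron’s kinetic energy via the virial theorem, are standard results):

```python
m_p_c2 = 938.272e6    # proton rest energy, eV
m_e_c2 = 0.511e6      # electron rest energy, eV
binding = 13.6        # hydrogen ground-state binding energy, eV

M_atom_c2 = m_p_c2 + m_e_c2 - binding   # the atom weighs slightly less than its parts
KE = 13.6                               # electron's kinetic energy (virial theorem), eV

# U = (M c^2 of the whole system) - (sum of constituent m c^2) - (kinetic energies)
U = M_atom_c2 - (m_p_c2 + m_e_c2) - KE
assert abs(U - (-27.2)) < 1e-3          # U = -27.2 eV, twice the binding energy
```

The interaction energy (-27.2 eV) is more negative than the atom’s overall energy deficit (-13.6 eV) precisely because the electron carries +13.6 eV of kinetic energy.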

I don’t know what it could mean to say “all physical particles are always trying to enter states with a lower total rest energy”. All electrons have the same rest energy: E_rest = m_electron c^2. And physical particles in a multi-particle system don’t do things independently; the *system* does things. Within a system, energy is conserved; energy can only be lowered if some energy leaves the system. For example, an atom can fall from an excited state to a less excited state, lowering its total energy and making its interaction energy more negative (though the electron’s kinetic energy also becomes more positive, just not by as much); but this can only happen if the atom emits a photon, which leaves the atom. So I don’t know where you’re going with this line of thinking, or what you mean. What are the basics of chemistry that you are trying to rely upon?

Upon binding, say a proton and electron, the loss of potential energy has to go *somewhere*, right? I’m pretty sure a 13.6 eV photon is emitted.

The equation m_atom < m_proton + m_electron would be more clearly written, with an addition,

m_atom + m_photon = m_proton + m_electron

(You know what I mean when I say m_photon… of course the equation would be more correctly written in terms of energies to avoid any confusion about the photon having a rest mass.)

Pingback: Courses, Forces, and (w)Einstein | Of Particular Significance

I just wanted to say thank you for the clear and informative articles on your site and for taking the time to produce them and answer people’s questions. I’m sure I speak for many others when I say that your work is really appreciated. Long may it continue!

Pingback: A Short Break | Of Particular Significance
