*For non-expert readers who want to dig a bit deeper. This is the first post of two, the second of which will appear in a day or two:*

In my last post I described, for the general reader and without using anything more than elementary fractions, how we know that each type of quark comes in three “colors” — a name which refers not to something that you can see by eye, but rather to the three “versions” of strong nuclear charge. Strong nuclear charge is important because it determines the behavior of the strong nuclear force between objects, just as electric charge determines the electric forces between objects. For instance, elementary particles with no strong nuclear charge, such as electrons, W bosons and the like, aren’t affected by the strong nuclear force, just as electrically neutral elementary particles, such as neutrinos, are immune to the electric force.

But a big difference is that there’s only **one** form or “version” of electric charge: in the language of professional physicists, protons have +1 unit of this charge, electrons have -1 unit of it, a nucleus of helium has +2 units of it, etc. By contrast, the strong nuclear charge comes in **three** versions, which are sometimes referred to as “redness”, “blueness” and “greenness” (because of a loose and highly imprecise analogy with the inner workings of the human eye). These versions of the charge combine in novel ways we don’t see in the electric context, and this plays a major role in the protons and neutrons found in every atom. It’s the math that lies behind this that I want to explain today; we’ll only need a little bit of trigonometry and complex numbers, though we’ll also need some careful reasoning.

### The Intricacies of “Color”

At any time, a particular quark must [roughly] have either redness +1 or blueness +1 or greenness +1 (and is said to be red, blue or green); any anti-quark has redness, blueness or greenness -1 (and is said to be either anti-red, anti-blue or anti-green). Gluons, unlike photons which are electrically neutral, are not neutral under the strong nuclear force: they carry a color and an anti-color (i.e. a +1 charge and a -1 charge under two of the colors). The details of gluons can easily become confusing, so I’m going to save them until the next post. But this fact implies that the gluon field, in which the gluons are ripples, also has color and anti-color.

This color and anti-color of the gluon field has a big impact on the quarks. It means that just by interacting with the gluon field around it, a red quark can turn green, or blue, over and over and over again at an extremely high rate. This makes it impossible to say what color it is. All we can really say safely is that quarks have color (i.e., charge +1 under one of the three strong nuclear charges) and anti-quarks have anti-color (i.e. charge -1 under one of the three) but we cannot say which color at any given time. Metaphorically speaking, as I suggested last time, a quark is almost like a light bulb that is always lit but whose color flickers randomly, continuously and rapidly between red, green and blue. There’s no point in assigning it a definite color.

Meanwhile, to make matters worse, hadrons, the particles in which quarks, anti-quarks and gluons are found in nature, never have any color at all; they are always neutral under the strong nuclear force, because the “colors” of all the particles inside cancel. That’s not so unfamiliar: an atom’s particles have total electric charge zero, too. But it is possible to ionize an atom (for instance by removing an electron, leaving the remainder with a net charge) allowing us to study objects with electric charge in a simple way. It is not, however, possible to color-ionize a hadron; the strong force is simply too strong. Objects with color can never be isolated and studied in detail.

So we can’t ever easily observe objects with color, and even if we did, it would be changing all the time anyway. Clearly, making sense of color is not going to be as simple as making sense of electric charge.

But here’s a fact that we could hope to understand. The most familiar hadrons seen in nature are (Figure 1)

- baryons, which appear at first to be made from three quarks [examples are protons, neutrons, and Lambdas]
- anti-baryons, which appear at first to be made from three anti-quarks [examples are anti-protons, anti-neutrons, and anti-Lambdas], or
- mesons, which appear at first to be made from a quark and an anti-quark [examples are pions, kaons and rho mesons]

Why these combinations and not others? What does this tell us about the strong nuclear force and the three versions of strong nuclear charge? When it was first proposed that the strong nuclear force might have three types of charge, it was believed that the proton is simply made from two up quarks and a down quark, and perhaps nothing else. This is quite different from the modern picture, where the hadrons also have gluons and additional quark-antiquark pairs.

But the baby steps that were made in those days got the basic math of color right. That’s what I want to explain today.

### Math of Two Ordinary Dimensions

The math of the three colors is similar to the math of the three directions of space. So let’s start with the math of two-dimensional space… the math of lines on a plane. That will put us on the right track.

If we take a sheet of paper, we can draw coordinates on it however we want. Cartesian coordinates — the usual x and y coordinates, with axes that are perpendicular to one another — are often convenient. But Cartesian coordinates on paper can be chosen in any orientation we like; the x axis could point eastward and the y axis northward, but it’s just as good to have x point northwestward and y point southwestward. More generally, I could rotate my x and y axes however I like; as long as I rotate them together, they will still make a good Cartesian coordinate system.

Let’s now draw a line on this piece of paper. The line stretches a certain extent L_{x} in the x direction and a certain extent L_{y} in the y direction; see Fig. 2 left. But L_{x} and L_{y} depend on my choice for the x and y axes; for example, if I chose my x axis to lie right along my chosen line, then L_{y} would be zero. If instead I choose my y axis to lie along the line, then it’s L_{x} that will be zero. By rotating the axes, as in Fig. 2 center, I can change both L_{x} and L_{y}. For the same reason, neither L_{x} nor L_{y} will stay the same if I rotate the line itself (keeping the coordinate axes fixed), as in Fig. 2 right. In technical language, neither L_{x} nor L_{y} is **rotationally-invariant**.

But clearly there **is** something rotationally-invariant about the line: its length, “L”, which I could measure by placing a ruler along the line, a process that doesn’t care about coordinates at all. Nevertheless, using the Pythagorean theorem, I can relate L to L_{x} and L_{y}, as in Figure 2.

- L^{2} = L_{x}^{2} + L_{y}^{2}

The fact that this statement must remain true *even if I rotate the axes or I rotate the line* is a remarkable limitation on how L_{x} and L_{y} can change.

It is to capture the math of this miracle that trigonometry was invented; L_{x} can be written as L (cos *θ*), where *θ* is the angle between the line and the x axis; similarly L_{y} can be written as L (sin *θ*). The fact that cos^{2}*θ* + sin^{2}*θ* = 1 for any and all possible *θ* then assures that **no matter how we rotate the line (or the axes), and thus change θ, the length L remains the same.** L_{x} and L_{y} depend on our coordinates and so are not rotationally invariant, but L itself is rotationally invariant.
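This invariance is easy to check numerically. Here is a minimal sketch in Python; the length and set of angles are arbitrary choices for illustration:

```python
import math

def components(L, theta):
    """Coordinate extents (Lx, Ly) of a line of length L at angle theta to the x axis."""
    return L * math.cos(theta), L * math.sin(theta)

L = 2.5                                # an arbitrary length
for theta in (0.0, 0.3, 1.2, 2.9):     # rotating the line (or axes) changes theta
    Lx, Ly = components(L, theta)
    # Lx and Ly change with theta, but Lx^2 + Ly^2 always equals L^2
    assert abs(math.sqrt(Lx**2 + Ly**2) - L) < 1e-12
```

Whatever angle you try, the assertion passes: the individual extents change, but the Pythagorean combination does not.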

This is not the only rotationally-invariant quantity arising in the context of two-dimensional space. If I have two lines, and then I rotate my axes, or rotate both lines together, the angle between the lines won’t change; it’s something I can measure with a geometric compass, and it doesn’t care how my chosen coordinates on the space are laid out.

Still, if I want to measure the angle using coordinates, there’s a famous way to do it. For simplicity, let’s first imagine that both our lines have length 1. Then if the angle between any two lines is *φ*_{12} (Figure 3 left), its cosine is just the **dot product**: if L_{x1} and L_{y1} are the coordinate lengths of the first line, and L_{x2} and L_{y2} are the same for the second line, then

- cos *φ*_{12} = L_{x1}L_{x2} + L_{y1}L_{y2} [dot product for lines of length one]

More generally, for lines of lengths L_{1} and L_{2},

- cos *φ*_{12} = (L_{x1}L_{x2} + L_{y1}L_{y2}) / (L_{1}L_{2}) [general dot product]

There’s yet another rotationally invariant quantity in two dimensions. If I have two lines, I can view them as the edges of a parallelogram (Figure 3 right), and the area of that parallelogram can’t depend on my coordinates or how I rotate the pair of lines. I can compute it using the coordinates using the two-dimensional **cross-product** (without assuming the lines have length 1):

- area = | L_{x1}L_{y2} – L_{y1}L_{x2} | [cross product]

where the absolute value around the right hand side assures the area is a positive number. (As a check: if the parallelogram is a rectangle oriented along the x and y axes, then L_{x1} = L_{1}, L_{x2} = 0, L_{y1}=0, L_{y2}=L_{2}, and the formula tells us the rectangle then has area L_{1} L_{2}, which is correct.)
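Both invariants can be verified the same way; in this sketch the two lines and the rotation angle are arbitrary choices, and the final line repeats the rectangle check from the text:

```python
import math

def rotate(Lx, Ly, alpha):
    """Rotate a line's coordinate extents by angle alpha."""
    return (Lx * math.cos(alpha) - Ly * math.sin(alpha),
            Lx * math.sin(alpha) + Ly * math.cos(alpha))

def dot(a, b):
    """Dot product of two lines given as (Lx, Ly) pairs."""
    return a[0] * b[0] + a[1] * b[1]

def cross_area(a, b):
    """Parallelogram area from the two-dimensional cross product."""
    return abs(a[0] * b[1] - a[1] * b[0])

line1, line2 = (3.0, 0.0), (1.0, 2.0)   # arbitrary example lines
alpha = 0.8                             # arbitrary rotation angle
r1, r2 = rotate(*line1, alpha), rotate(*line2, alpha)

# dot product and area are unchanged when both lines are rotated together
assert abs(dot(r1, r2) - dot(line1, line2)) < 1e-12
assert abs(cross_area(r1, r2) - cross_area(line1, line2)) < 1e-12

# sanity check from the text: a rectangle along the axes has area L1 * L2
assert abs(cross_area((3.0, 0.0), (0.0, 2.0)) - 6.0) < 1e-12
```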

What we have just done is understand how we create quantities that are invariant under the rotations of two ordinary coordinates. These rotations form a set, or rather a more structured set called a “group”, that goes by the name SO(2).

### Math of three ordinary dimensions

Now let’s move to SO(3): the rotations of **three** ordinary coordinates. This will bring us much closer to the math of color.

The generalization of lengths and of dot products to three dimensions (x,y,z) is completely straightforward; there’s a three-dimensional generalization of Pythagoras’s theorem and of the usual rules of trigonometry, whose details we don’t need here. The effect is that if I have one line, its length is

- L^{2} = L_{x}^{2} + L_{y}^{2} + L_{z}^{2}

and the angle between two lines of length L_{1} and L_{2} is again given by a dot product.

- cos *φ*_{12} = (L_{x1}L_{x2} + L_{y1}L_{y2} + L_{z1}L_{z2}) / (L_{1}L_{2})

What generalizes the cross product? Being now in three dimensions, we will focus not on the **area** of an object whose edges are formed from two lines but instead on the **volume** of an object whose edges are formed by three lines, as in Figure 4 — often called a parallelepiped. This volume is given by a triple product: it can be viewed as the dot product of the first line with the cross-product of the second and third, or as the dot product of the third with the cross-product of the first and second, and so on. When the dust settles:

- Volume = | L_{x1}L_{y2}L_{z3} – L_{x1}L_{z2}L_{y3} + L_{y1}L_{z2}L_{x3} – L_{y1}L_{x2}L_{z3} + L_{z1}L_{x2}L_{y3} – L_{z1}L_{y2}L_{x3} |
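A quick numerical check of the six-term formula; the edges below form a unit cube (an arbitrary but easy test case), and rotating all three edges together leaves the volume at 1:

```python
import math

def triple_product(a, b, c):
    """Six-term triple product: the volume (up to sign) of the parallelepiped
    with edges a, b, c, each given as (Lx, Ly, Lz)."""
    return (a[0]*b[1]*c[2] - a[0]*b[2]*c[1]
          + a[1]*b[2]*c[0] - a[1]*b[0]*c[2]
          + a[2]*b[0]*c[1] - a[2]*b[1]*c[0])

def rotate_z(v, alpha):
    """Rotate a 3D vector about the z axis by angle alpha."""
    x, y, z = v
    return (x*math.cos(alpha) - y*math.sin(alpha),
            x*math.sin(alpha) + y*math.cos(alpha), z)

edges = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]  # a unit cube
assert abs(abs(triple_product(*edges)) - 1.0) < 1e-12

rotated = [rotate_z(v, 1.1) for v in edges]  # rotate all three edges together
assert abs(abs(triple_product(*rotated)) - 1.0) < 1e-12
```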

Now this is already quite remarkable. Looking back at Figure 1, mesons seem somewhat analogous to dot products, baryons to triple products. Could there be a connection? Yes, there could; but ordinary dimensions are not enough.

### Math of three complex dimensions

Let’s now imagine that instead of x, y, z being real numbers, we allowed them each to be complex numbers. This space of three complex dimensions is no longer one we can draw; we have to rely mainly on math to understand it.

Keep in mind, also, that these complex dimensions are not to be confused with the dimensions of empty space that we actually **live** in. They are just useful for keeping track of math, and not a concrete part of physical space, in which you and I and other real objects move around and can bump into one another. Only the **strong nuclear charges of particles** will (later) be associated with this space and change within it; the particles themselves will move around, as always, in our ordinary and familiar three-dimensional space.

Back to the math. With three complex instead of ordinary coordinates, there are now even more rotations than before. Not only could we rotate the x and y axes into each other as we ordinarily do, we could also multiply all the coordinates by a complex phase, such as the basic imaginary number *i*, the square root of -1. This larger set of rotations constitutes the group SU(3).

What’s really new about having complex coordinates is that we now have lines **and** anti-lines; lines have coordinates, and anti-lines have complex conjugate coordinates. The complex conjugate of a line is an anti-line, and vice versa. This is important, because the length of a line can no longer be given by

- L^{2} = L_{x}^{2} + L_{y}^{2} + L_{z}^{2}

because if I multiply x, y, and z by *i*, then L_{x} → *i* L_{x}, and L_{x}^{2} → – L_{x}^{2}, and the same for L_{y} and L_{z}, with the effect that L^{2} becomes negative, and L becomes imaginary! A length defined this way would not be rotationally-invariant, or even meaningful.

Instead, the right formula for a rotationally-invariant length combines a line with the anti-line that is its complex conjugate:

- L^{2} = L_{x}L_{x}* + L_{y}L_{y}* + L_{z}L_{z}*

where L_{x}*, L_{y}*, L_{z}* are the coordinates of the anti-line that is the complex conjugate of the original line (specifically, L_{x}* is the complex conjugate of L_{x}, etc.) This length won’t change if we do a phase rotation; for instance, if we multiply L_{x}, L_{y}, L_{z} by *i*, then L_{x}*, L_{y}*, L_{z}* are multiplied by –*i*, and since (*i*) times (–*i*) = –*i*^{2} = +1, that leaves L unchanged.
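This conjugate form of the length is straightforward to spot-check; in this sketch the complex coordinates are arbitrary, and we multiply them by *i* and by another arbitrary phase:

```python
import cmath

def length_sq(line):
    """L^2 = L_x L_x* + L_y L_y* + L_z L_z*; always real and non-negative."""
    return sum((z * z.conjugate()).real for z in line)

line = [1 + 2j, 0.5 - 1j, -3 + 0.25j]        # arbitrary complex coordinates
for phase in (1j, cmath.exp(0.7j)):          # multiply by i, or by any other phase
    rotated = [phase * z for z in line]
    # the conjugate coordinates pick up the opposite phase, which cancels
    assert abs(length_sq(rotated) - length_sq(line)) < 1e-12
```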

Similarly, we can no longer take a dot product between two lines. We can only take it between a line and an anti-line. If a line has coordinates L_{x1}, L_{y1}, L_{z1} and an anti-line has coordinates L_{x2}*, L_{y2}*, L_{z2}*, then a rotationally invariant quantity is

- L_{x1}L_{x2}* + L_{y1}L_{y2}* + L_{z1}L_{z2}*

Although I can’t illustrate this in 3 complex dimensions, I can illustrate it in 1 complex dimension — a single complex plane, shown in Figure 5. If x_{1} is a complex number, and x_{2}* is the complex conjugate of a complex number x_{2}, then x_{1} x_{2} is not invariant under rotation of the complex plane (i.e. rotation of all complex numbers by the same phase), but x_{1} x_{2}* is invariant because the phase cancels out.

For the triple product, however, it turns out we need either three lines or three anti-lines. The formula is the same as I quoted for SO(3):

- | L_{x1}L_{y2}L_{z3} – L_{x1}L_{z2}L_{y3} + L_{y1}L_{z2}L_{x3} – L_{y1}L_{x2}L_{z3} + L_{z1}L_{x2}L_{y3} – L_{z1}L_{y2}L_{x3} |

That this quantity is rotationally invariant isn’t obvious, but it is true, for the same reason it was true for three ordinary dimensions. The complex-conjugate version of the formula, built from three anti-lines, is invariant as well. But the corresponding quantity built from two lines and an anti-line is not invariant.
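One way to spot-check this invariance is with a couple of sample SU(3) elements: a diagonal matrix of phases summing to zero, and an ordinary rotation mixing two axes. This is only a sketch under those specific choices, not a fully general check; the matrices and lines below are arbitrary illustrative picks:

```python
import cmath, math

def triple_product(a, b, c):
    """The six-term triple product of three complex 'lines'."""
    return (a[0]*b[1]*c[2] - a[0]*b[2]*c[1]
          + a[1]*b[2]*c[0] - a[1]*b[0]*c[2]
          + a[2]*b[0]*c[1] - a[2]*b[1]*c[0])

def apply(matrix, v):
    """Apply a 3x3 matrix to a 3-component complex vector."""
    return [sum(matrix[i][j] * v[j] for j in range(3)) for i in range(3)]

# Two sample SU(3) elements (unitary, determinant 1):
phases = [[cmath.exp(0.4j), 0, 0],
          [0, cmath.exp(0.9j), 0],
          [0, 0, cmath.exp(-1.3j)]]          # phases sum to zero, so det = 1
c, s = math.cos(0.6), math.sin(0.6)
mixing = [[c, -s, 0], [s, c, 0], [0, 0, 1]]  # an ordinary rotation of two axes

lines = [[1+1j, 0.5j, 2.0], [0.0, 1-2j, 1j], [3.0, 1.0, -1+1j]]  # arbitrary
before = triple_product(*lines)
for U in (phases, mixing):
    after = triple_product(*[apply(U, v) for v in lines])
    assert abs(after - before) < 1e-9        # unchanged under both rotations
```

The reason it works is that the triple product is the determinant of the matrix formed from the three lines, and rotating every line multiplies that determinant by the rotation matrix’s own determinant, which is 1 for SU(3).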

So in SU(3), as in SO(3), we have dot products and triple products, but now

- dot products can be formed between a line and an anti-line, while
- triple products can be formed between three lines or between three anti-lines.

This now seems strikingly suggestive of mesons, baryons and anti-baryons — see Figure 1 — and the question is how to bring this math to bear on the actual physics.

### The Mesons, Baryons and Anti-Baryons of the 1960s

The three complex dimensions x, y, z we’ve just encountered should be identified, in the context of the strong nuclear force, as redness, greenness and blueness. A quark, therefore, is like a line in this space: perhaps it points along the redness direction (making it “red”), or along the greenness direction, or perhaps it points in a general direction (making it a combination of blue and red, or perhaps of all three colors). To specify which direction it points in requires us to choose coordinates — to define what we mean by “redness” and “blueness” and “greenness” — and that’s arbitrary, so we can’t meaningfully say that a quark is red; we could change coordinates and make it half-green/half-blue. However, we can always say that a quark has color: it corresponds to a line in this three-complex-dimensional space, and although the line’s coordinates aren’t rotationally invariant, the fact that it has non-zero length most certainly is.

Meanwhile, if we take a quark and an anti-quark, we can create something from them which is truly colorless: it is completely independent of any choice of our color coordinates. A naive meson consists of **a quark and an antiquark which have been combined using the dot product**: with a quark 1 with red, green and blue coordinates, and an antiquark 2 with similar anti-coordinates, the following combination is “color-less”:

- q_{r1}q_{r2}* + q_{g1}q_{g2}* + q_{b1}q_{b2}*

Physically speaking, whatever redness the quark has is balanced by that of the anti-quark, whatever greenness it has is similarly balanced, and so for blueness. This remains true if we redefine what we mean by redness, greenness and blueness. It also remains true as the quark and anti-quark interact with each other; the quark might change from red to green, but when that happens the anti-quark will change from anti-red to anti-green. This kind of synchrony is essential to assure the meson is always colorless.

In the same way, a naive baryon is made from **three quarks combined using the triple product**:

- | q_{r1}q_{g2}q_{b3} – q_{r1}q_{b2}q_{g3} + q_{g1}q_{b2}q_{r3} – q_{g1}q_{r2}q_{b3} + q_{b1}q_{r2}q_{g3} – q_{b1}q_{g2}q_{r3} |

In this case the combined redness, greenness and blueness of the three quarks is colorless, and is independent of how we choose to define the “colors”, as long as all six terms in the above expression are present with the precise choice of plus and minus signs. (An anti-baryon is defined analogously.) What this means is that no two of the quarks in the baryon ever have the same color; and if you know the colors of two of the quarks, the third’s color is automatically determined.

We can now, naively, explain Figure 1 as representing these expressions, shown schematically in Figure 6.

If you have learned linear algebra, you will recognize the meson as the dot product of a quark, written as a three-component vector, with an anti-quark, written as a three-component conjugate vector; and you may recognize the baryon as the determinant of a 3×3 matrix whose columns are the quarks and whose rows are their colors. See Figure 7. These are the simplest SU(3)-rotationally-invariant objects that can be constructed from vectors and conjugate vectors.
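In that linear-algebra language, the colorlessness of the naive meson and baryon can be spot-checked numerically. In this sketch, the quark coordinates and the sample color redefinition (a diagonal SU(3) phase matrix) are arbitrary choices made for illustration:

```python
import cmath

def meson(q1, q2):
    """Quark q1 dotted with the anti-quark of q2 (q2's conjugate coordinates)."""
    return sum(a * b.conjugate() for a, b in zip(q1, q2))

def baryon(q1, q2, q3):
    """Determinant of the 3x3 matrix whose columns are the three quarks."""
    m = list(zip(q1, q2, q3))   # rows are colors, columns are quarks
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def recolor(U, q):
    """Redefine the color coordinates of a quark by the matrix U."""
    return [sum(U[i][j] * q[j] for j in range(3)) for i in range(3)]

# A sample color redefinition: three phases summing to zero (an SU(3) element)
U = [[cmath.exp(0.5j), 0, 0],
     [0, cmath.exp(-0.2j), 0],
     [0, 0, cmath.exp(-0.3j)]]

qs = [[1+1j, 0.5, 2j], [0.5j, 1.0, -1+1j], [2.0, 1j, 0.25]]   # arbitrary colors
# both combinations are unchanged when all quarks are recolored together
assert abs(meson(*[recolor(U, q) for q in qs[:2]]) - meson(*qs[:2])) < 1e-9
assert abs(baryon(*[recolor(U, q) for q in qs]) - baryon(*qs)) < 1e-9
```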

But these naive baryons, made from three quarks and nothing else, aren’t the real thing. How do we go from here to real baryons — real protons, with three quarks along with a horde of gluons and quark-antiquark pairs? We only need one small bit of additional math… for next time.

Thank you for a fascinating discussion of fermion colors, which is one of my favorite topics!

I’ve mentioned it before, but given its relevance, I can’t resist bringing up Sheldon Glashow’s 42-year-old visual mnemonic for recalling electric and color charges on the fermions. In more formal terms, what he did amounts to this:

(a) Forget for a moment that the electric and color charges are separate. And yes, this assertion amounts to grand unification by decree, which is kind of hilarious, to say the least. But as Glashow first noticed, it is always valid and it makes fermion charge relationships a lot easier to visualize and recall.

(b) Using this unification-by-decree, treat the combined color and electric charges on each of the three colors of anti-down as three _singular_ orthogonal unit vectors in a 3-space generalization of Maxwell’s ancient 1-space electric displacement.

—–

These units become the T3-up basis. Their sum is the colorless, electric-only +1 body diagonal of a cube. Using physical anti-down quarks to do this unit vector sum experimentally gives the spin 3/2 anti-Delta^(-1) baryon with charge +1.

While anti-down quarks are a convenient way to define their values, these units amount to a vector decomposition of positive T3. They are not colors, nor are they pro or anti. They are simply transformation operators in a 3-space in which we can then define both color and electric charge.

The combined electric-color charges of the down quarks similarly provide the negative T3 basis.

All of this results in two cubes, which together give the charges for all fermion types. That was the mnemonic that Glashow mentioned in 1980, although he only showed the positive T3 cube.

What’s fun is that if you combine the two cubes in one 4-space, the fermions become not points, but bridge vectors between corresponding corners of the two cubes. Each such bridge vector is identical to a fermion isospin pair.

This makes the Glashow cubes useful not only for remembering fermion charges but for recalling what weak interactions are permissible. Below are two PDFs with graphical examples of how that works.

A Simple Visual Model of the Weak Force

[Editor: Link removed]

Fermions as 4D Vectors (The Pond Analogy)

[Editor: Link removed]

Sorry, I don’t allow links to private work.

In general, this stuff is fun, yes, but it’s also well known. (You’re just geometrically restating the fact that each generation of Standard Model fermions, plus a sterile neutrino, fits into the spinor representation of SO(10) [cf. also Pati-Salam], a fact well known from the 1970s and first efforts at grand unification.) You can write the math of SO(10) in all sorts of nice geometric ways, to be sure, and people have put all sorts of maps of the fermions on-line. But organizing and reorganizing teaches you about math, not about physics; and it ignores that the bosons do not fit into similar principles. Organizing the bosons along similar lines requires adding new bosons which cause proton decay and also make it difficult to explain the fermions’ masses. This makes a complete mess. So let’s not cherry-pick; it’s unfair to say “look how beautiful the fermions are!” without saying “look how ugly the bosons are!”

No problem at all removing the links! The original figure is page 705 of Glashow’s 1980 “The Future of Elementary Particle Physics,” but alas, it’s paywalled. Also, since I cannot remove it (?), you may want to edit a link I think I put in about two postings back. I think that’s the only one.

Yes, this is old stuff. I mentioned Glashow, but I suspect the mnemonic figure was from Salam.

Symmetry groups are, of course, just a programming language, nothing more. As implemented in many situations, they are a programming language that tends towards being very sloppy about simple issues such as how many bits of precision are in play when describing finite physical systems.

Symmetry group language can be dreadful at capturing small, physically significant results from experimentation. That leads to another form of cherry-picking: Placing mathematical faith over repeated physics results.

Pretend you’re a formula-pattern-recognition AI for a moment. Those exist, and frankly, they are better at generating simplest-possible equations than about 99.99% of humankind. Here’s your input: the color-electric charge patterns of all known half-spin particles. Incomplete due to the lack of bosons, sure, but half-spin is… well, kind of unique, yes?

What do you get for output?

Quaternions, more or less, or at least i+j+k. Neutrino charges have none, down-types one, up types two, electron-types three. The AI doesn’t isolate the strong and electric charges _because it does not see that in the data_, and it’s not even a very complicated data set.

One thing I assure you will never get out of an unbiased machine AI looking at fermion charge data is those incredibly bloated symmetry-first matrices of the Standard Model. They work great, mind you, but then programming is like that: There are always many, many ways of recreating the same data results. The matrices of the Standard Model, however, require _ignoring_ the fixed ratios of electric and color on fermions and favoring math first.

That’s purely historical since everyone “knows” electric charge was found first and thus must be, you know, separate. An AI physics equation derivation system doesn’t care about history, just mathematical simplicity for the given data. The easier model is never to separate electric and color charges in the first place and assume that our tendency to do so is just historical bias based on our much-bigger-than-nucleons size. We missed the color memo.

I don’t think I agree with your characterization of what a computer would do. The AI will have a hell of a time figuring out that there are quarks in the first place. Color, after all, is confined, whereas electric charge is not — this is because an SU(3) is completely different *physically* from a U(1), thanks to gluon self-interactions. Only human beings, analyzing this data and doing a huge amount of work, would even figure out that there are quarks and gluons in a proton, and that the quarks are at the same level of elementary structure as the electrons — and only then would put the quarks and electrons into a single table and notice that they fit into a larger symmetry. AI would be distracted by all sorts of irrelevant issues in the data, and would likely still be trying to figure out what a proton is, while still putting electrons, protons and neutrons into symmetry groups.

So you agree there is an extremely simple and invariant pattern in the fermion charge data, a pattern that we know about only because of the flat-out magnificent experimental and theoretical (Gell-Mann especially) work in the late 1970s, before the superstring pure-math-only speculation annihilated the next half century of data-first theorizing. What an amazing time the 1970s were; I loved it!

In any case, ignoring the simplest, most invariant subcomponent of a more complex symmetry is almost never a good idea, in physics or math.

The quaternion equivalence of the fermion charges is a good example of just such a small but invariant subset. It is of course embedded in the higher symmetries, and even now I could have stated “quaternion” in terms of symmetries. I don’t, because I think abstract symmetries are a truly terrible programming language for expressing the simplest aspects of an invariance. Even the stupid U(1) group adds a bit of imaginary-axis math noise to the idea of rotational symmetry.

In any case, emphasizing the quaternion subcomponent of the symmetries is not exactly a bad idea. The quaternion unit sphere has some absolutely gorgeous symmetries that most folks are not even aware of, including, for example, a pro-anti layering effect identical to that seen in the Glashow fermion cube.

As for computers not being able to derive that kind of simplicity from raw data, I’m afraid it’s kind of the other way around? Computers are many orders of magnitude better than humans at looking through massive sets of data. It was the cognitive-insight aspect of the data perusal that slowed that area down for decades, but that’s no longer true. I can look up some references for you if you want, but I haven’t checked in myself for a few years now. It’s likely a bit more advanced by now, since there are commercial incentives in that area.

I would say you’re doing the same thing string theorists do. There’s no data supporting the quaternionic idea. The bosons don’t like the idea nearly as much as the fermions do, and you’re ignoring this.

Computers are just barely able to do a decent job at the Large Hadron Collider now. They can only be deployed in limited situations. They are nowhere near capable of discovering the quarks we know of, much less understanding their properties.

Another really interesting article. I have always been fascinated by the fundamental nature of our universe. Question: “as electrically neutral elementary particles, such as neutrinos, are immune to the electric force”

Does this mean that non-fundamental neutral particles, such as the neutron, which is composed of quarks, can be affected by an electrical interaction at a sufficiently close range? In other words, the electric charges of the quarks that make up the neutron all cancel out. But, if I was to fire an electron at the neutron, and the electron got close enough to the “negative quark” inside the neutron, would it interact electrically with the neutron?

Any of the quarks will do, whether negatively or positively charged; an electron that enters the neutron will indeed scatter off them. This is called deep inelastic scattering, and was used to discover quarks in the first place back in 1969-1970; it was initially done for electron-proton scattering, but then extended to electron-deuteron scattering, where the fact that the deuteron has a proton *and* a neutron in it makes the results quite different from electron-proton scattering. In all these cases, the dominant scattering process at high energy and deflection angle involves an electron hitting a quark, not the proton or neutron that contains it.

Moreover, the neutron has a magnetic dipole moment, reflecting its contents. It could in principle have an electric dipole moment too, though there is an unexpected symmetry in the strong nuclear force that makes it too small to measure with current technology.

Follow up question:

The electron enters the proton, which has three quarks. Overall the proton has +1 charge. If the electron enters the proton, and encounters one of the positive quarks, assuming it’s not a head on collision, the electron will “curve” around the quark, since the force is attractive between the positive quark and negative electron, exchanging momentum with it, and come out at some angle. However, what if it’s a head on collision? Would the positive quark and negative electron stick together? Since the force between them is attractive?

Notice that electrons don’t ever stick to protons; they surround them at a distance of order 100,000 times the proton’s radius. Why is that? Well, the answer is given in https://profmattstrassler.com/2022/06/30/the-size-of-an-atom-how-scientists-first-guessed-its-about-quantum-physics/ where I explained the physics that sets the size of an atom. Exactly the same argument tells you that electrons won’t ever stick to quarks either. Your question, as phrased, shows you’re thinking about electrons and quarks as though they are “particles” in the usual sense we mean in English — points that move around like grains of dust. But the wave-like properties of the quarks and especially the electrons are extremely important; they may strike each other head-on, but then they will spread out, as waves will do. The equations which describe this encode the uncertainty principle, which does not allow the electron’s position and momentum to simultaneously be well-specified. In short: no sticking.

Thank You! I kind of figured it had to do with the wave nature of particles. Slightly off topic question: do you think you can do a blog about how physicists can tell the difference between red shifting due to the expansion of space versus red shifting due to “objects” (stars) moving thru space?

Probably not. I don’t think there’s a simple answer; a redshift is a redshift, and how you interpret it depends on your coordinates. But I’d have to think that through.

Regarding a sibling comment thread, I don’t think I follow everything but unit quaternions are (isomorphic to) SU(2) and a double cover of SO(3) so is it that surprising that you can cut down the data and get them showing up?

And re: the AI claims, in my day job I’m a data scientist. I have to assume that by “formula-pattern-recognition AI” the author is talking about ‘symbolic regression’ which is absolutely a thing, and sometimes it actually works, but by no means is it ‘unbiased’.

I agree that people get way too excited about quaternions and octonions, which sound fancy but are just restatements of standard things… and furthermore, an organizational principle isn’t enough until it makes predictions, not just postdictions. And it also needs a dynamical principle that goes with it before it can be taken really seriously.

Jake,

It’s not just symbolic regression. Here’s a recent paper on the status of the automated derivation of closed formulas from raw data, including a nice quick summary of references in the first paragraph for uses of such methods in physics. The paper focuses on the impact of noise, which is fine if you know what is actual noise and what is not. Still, it also raises a question: If you grab 1000 pi digits from far into the sequence, how do you know from the digits themselves whether it’s noise or generated by a brief pattern?

“Fundamental limits to learning closed-form mathematical models from data”

Fajardo-Fontiveros et al., April 2022

https://arxiv.org/pdf/2204.02704.pdf
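[Editor's aside: for readers unfamiliar with the idea, here is a deliberately tiny stand-in for the far more sophisticated methods surveyed in that paper — "symbolic regression" in its crudest form, searching a small hand-picked library of closed forms for the one that best fits noisy data. All names and the candidate set are illustrative only:]

```python
import math
import random

# Candidate closed-form models, as a name -> function lookup table.
candidates = {
    "x^2":    lambda x: x * x,
    "sin(x)": lambda x: math.sin(x),
    "exp(x)": lambda x: math.exp(x),
    "2x+1":   lambda x: 2 * x + 1,
}

# Synthetic data from the "unknown" model y = x^2, plus a little noise.
random.seed(0)
data = [(x / 10, (x / 10) ** 2 + random.gauss(0, 0.01)) for x in range(-20, 21)]

def mse(f):
    """Mean squared error of a candidate model against the data."""
    return sum((f(x) - y) ** 2 for x, y in data) / len(data)

# Pick the candidate with the lowest error.
best = min(candidates, key=lambda name: mse(candidates[name]))
print(best)  # the true model, "x^2", has by far the lowest error here
```

At this noise level the true model wins easily; the paper's point is precisely that as noise grows, distinguishing the true closed form from competitors becomes impossible in principle.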

Noise is not the problem. You seem to have a very simplified view of what particle physics actually involves, both from the experimental and theoretical point of view. The formulas of particle physics involve infinite dimensional integrals and complicated resummation of perturbation theory; the calculations must be very carefully managed to avoid unphysical infinities, and defining observables properly often requires sophisticated mathematical methods that AI would struggle to rediscover. Meanwhile the data is also extremely complex; it’s not even obvious what the objects of analysis should be when it comes to high-energy quarks and gluons, and information is always being lost down the beampipe, so every observed collision event is intrinsically incomplete. This is all a very long, long way from 16 points on a hypercube. As I said, efforts to actually apply these methods to physics at the Large Hadron Collider — not trying to rederive physics from scratch, but just characterize the details of particular measurements — are still in their infancy.

Hi Matt Strassler,

You do realize that after checking a few papers, I was agreeing with you and Jake about the weakness of AI methods for topics such as collider physics? We’re still in the early neural-net era. That’s been incredibly powerful commercially, but it’s not even real AI, just mass-produced trained perception with enough hardware to play many cool tricks.

AI and collider methods were never the issues, of course. The issue is this: Why do quarks always have fixed ratios of color and electric charge? You and I both know that all that data you are looking at, and all those integrals, aren’t going to change those ratios one whit. The simplest interpretation of those ratios is that color and electric charge are a single quantity at the quark level: scale-dependent grand unification. If taking Glashow’s cube seriously converts photons into the missing ninth gluon, so what? The existing Standard Model remains incredibly complete and predictive.

The usual impact of simpler models is not to undo work, but to make new dynamic predictions and illuminate new connections. Is that bad?

Predictions would be great. Here’s one: sterile neutrinos (the 16th particle in each generation). They haven’t shown up yet. Here’s another. The gauge bosons should respect the structure. They don’t. So the structure is broken. Also the Higgs and its interactions with the fermions should respect the symmetries. But the masses don’t have anything to do with the generational structure. So far your predictions aren’t working out at all.

[This is a massively corrected repeat of my 12:45 AM comment. Please delete the 12:45 AM version. And this one, too, if you prefer. It’s your blog! 🙂 ]

———-

I laughed out loud at your “you’re being a string theorist” rejoinder! Touche, good sir, you made my day happier!

An example of the type of raw data where computers have made serious, and even unsettling, progress in recent years is the astronomical data Kepler inherited from Tycho Brahe. Kepler spent years poring over that data to derive mathematically reliable rules for predicting planetary motions. His realization that, in certain cases, vast sets of data reduce down to simple equations was one of the most important transitions in the history of science.

My point was that a modern computer with the right software could peruse that same raw data and uncover those same equations in seconds or less. Think of it as an especially extreme and insightful form of data compression.

It’s easy these days to accept the amazing effectiveness of equations for representing large data sets. For Kepler, it was more like heresy, possibly literally.

That Kepler’s heresy worked seems to me one of the most amazing mysteries of our universe. Why are these simple programs we call equations so incredibly effective at representing and compressing gigantic sets of raw data? The existence of such compressions is a fascinating statement about the fabric of our universe.
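[Editor's aside: the Kepler example really is a few-line computation today. A sketch using modern tabulated orbital values rather than Brahe's raw observations: fitting log T against log a recovers the exponent 3/2 of Kepler's third law, T² ∝ a³.]

```python
import math

# Semi-major axis a (AU) and orbital period T (years) for six planets.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

# Least-squares fit of log T = p * log a + c; Kepler's third law says p = 3/2.
xs = [math.log(a) for a, T in planets.values()]
ys = [math.log(T) for a, T in planets.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
p = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(round(p, 3))  # ~1.5, i.e. T^2 proportional to a^3
```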

As for “no data for quaternionic structure,” why did you quote those earlier symmetries if you don’t accept them? As you pointed out, I’m only saying the same old stuff in different terms, not changing it. That’s why I keep calling this a mnemonic, rather than something new. Making well-known, well-accepted rules easier to visualize is not the same as creating new ones.

I assume you accept that neutrinos and electrons were, and in some sense still are, the “same” particle, one that froze into its current T3 up-or-down versions as the universe cooled. And I think you said explicitly that some quarks have 1/3 electric charge and others 2/3 electric charge, and they come in three colors and three anti-colors. Finally, even if you have not seen a copy of it, I assume you accept that Glashow’s 1980 fermion charge mnemonic is a valid re-arrangement of decades of data on hadron behaviors.

If you accept all of the above I don’t need to defend the existence of a quaternion-like structure in the fermion data set. Print a copy of Glashow’s 1980 mnemonic, pencil in i, j, and k on the lower three edges of his cube, and voila! there is your quaternionic structure. Don’t blame me for that one, blame Glashow. He, or more likely Salam, noticed that structure decades before I and others did. And sure, you can name the three axes anything you want, but ijk is convenient and has symmetries that respect the fermion data.

What is unsettling about Glashow’s cube is its implications if taken as a theory rather than as a mnemonic. Its theory interpretation suggests the relationship between electric and color charges is not a “simple” symmetry break. Instead, it implies _two_ levels of electric-color separation: Full separation for electrons and incomplete separation for quarks. First, ouch! Second, what does that even mean?

Big, abstract symmetry groups are lousy for expressing such distinctions for the same reason that jackhammers are lousy for repairing watches. They tend to overlook “minor” details such as asymmetric layering or emergence.

Finally, you make a superb point about me ignoring bosons in this discussion of fermions. Guilty as charged! Messy things, bosons, especially in nucleons. However, my take is that the situation for bosons is even worse than recognized, but also more interesting since for bosons you get deeper into the roles and natures of space and time.

As I said in reply to another comment, your understanding of how particle physics is actually done, both in the theoretical domain and in the experimental domain, is lamentably naive. Don’t you think particle physicists are using AI? Of course they are. But it doesn’t have anywhere near the capabilities that you claim.

/(.) dot products can be formed between a line and an anti-line, while

(.) triple products can be formed between three lines or between three anti-lines./

Cartesian coordinates are the human cognitive perception, mostly linear (“one photon 👁️”); the third eye gives information about diffeomorphism (an almost non-repeating pattern, a pandemonium), the physical reality.

We infer from this information that there are color charges, again using the Cartesian perspective (#Manichaeism), which is not a physical reality.

This two-photon effect (👀) is from outside the Universe, a localized “quantum reality”; at the Equivalence principle it could be discerned from “the statement that photons and gravitons are NG bosons”.

Photons have no charge or color charge, unlike gluons, but a non-zero length in angular momentum [SO(3)] and zero length at c^2; is that prevented in color charge?

If the outside influence which change “rotational variance (non-identical mirror image at decay)” to “rotational invariance (proton mass)”, vanish….

like said in Hawking radiation, start proton decay?

/“redness”, “blueness” and “greenness” (because of a vague but highly imprecise analogue with the inner workings of the human eye)./

The Conjugative color charges are quantum reality, not a physical reality. This quantum reality also happens in Black hole (Black hole information paradox).

/Vast sets of data reduce down to simple equations was one of the most important transitions in the history of science./

It is like in Trigonometry, in order to reconcile quantum mechanics with black holes, Chapline theorized that a phase transition in the phase of space occurs at the event horizon.

/The physical reality of Time dilation and Rest mass cannot coexist/.

Time dilation is a Physical reality and the

Rest Mass is a Quantum reality !?

[This is my FOURTH attempt to reply to Matt Strassler’s “quaternion fever” comment. I lost my entire text the first time! I’m top-threading this attempt, but I’m losing hope. Matt, if you have a block — and it’s your right to do so, this is _your_ blog, not mine — could you please let me know?]

Hi Matt Strassler,

Regarding quaternions, it would be deeply unfair for me to blame John Baez and other quaternion advocates for my labeling of the 1980 Glashow fermion mnemonic cube, which goes like this:

{i, j, k} = +⅓ + {~r, ~g, ~b}

{-i, -j, -k} = {I, J, K} = -⅓ + {r, g, b}

The above equations have _no_ relation to Baez et al.’s years-long advocacy of quaternions and octonions. I only needed a set of axis labels other than {r, g, b} to formalize Glashow’s 1980 cube structure, and of the infinite number of square-roots-of-1-and-minus-one systems possible, good ol’ simple {i, j, k} works nicely. John Baez would never agree with the above equations since he once wrote what is still, I think, one of the best explanations on the internet on why there are only eight gluons instead of 9 in the Standard Model.

I think the main reason Glashow presented that cube as just a mnemonic is that, if taken as a theory, it implies that the electroweak and strong forces unify not at high energies but at spacetime scales less than about 10^(-15) m — that is, inside baryons and mesons. Also, unavoidably, the photon becomes the missing infinite-range gluon. The Standard Model excludes this gluon because there is no evidence of it outside of hadron interiors. Well, yes. That’s because whenever we see it, we call it a photon.

It doesn’t matter much. The 8-gluon bases work magnificently and are one of the most impressive results of the Standard Model. The only thing the Glashow cube equations say is that if you start at the foundation level — which, of course, is not the case — then significantly simpler equations and models would emerge by adding size-dependent force unification to the heuristics mix. You, um, also have to rewrite space and time a bit?

The electromagnetic force is still separate from the other, but only at scales larger than atomic nuclei. Inside nucleons, nope. You can still model electroweak and strong as independent forces at the sub-hadron level using spacetime-lattice-QCD. Still, the astronomical pseudo-energy levels created by highly detailed lattices inevitably make such models messy.

The non-physical energy implications of fine-grained QCD lattices make them highly effective and delightfully ironic. It’s like shining a gigawatt laser (the lattice) on the interior of a watch while hosing it with liquid nitrogen (renormalization) to keep it from vaporizing. You _do_ get a bright, shiny view, but please be sure not to mistake all that frenetic bubbling-freezing at the metal surfaces with how the watch works when left alone.

The idea that spacetime is “different” inside nucleons is quite old. Unfortunately, the original rather vague idea gave rise to quark-denying S-matrix and, even more oddly, to superstrings via the S-matrix obsession with “abstract” equations. Real hadronic strings used quarks at the ends of string-like flux tubes, while “superstrings” replaced all that silly physical stuff with mammal brains trained to get endorphin kicks from pretty equations. In exploring S-matrix, I never encountered any papers on the idea that small scales might unify forces, but I try never to underestimate what’s in the deep literature.

Sorry, but — what can I say? “The electromagnetic force is still separate from the other, but only at scales larger than atomic nuclei. ” That’s simply false, according to the data of the last 40 years.

Hi Matt Strassler,

>… “at scales larger than atomic nuclei.”

That’s simply false according to data of the last 40 years.

Only 40? Gell-Mann & Zweig saw quarks in 1964, ~60 years ago. Gross & Wilczek saw asymptotic freedom in 1973, ~50 years ago. What 1980s data proved that confined quarks stop having fractional electric charge at sufficiently small scales?

[long list of speculations truncated by host]

Data has long ago proved that the electromagnetic and strong nuclear forces do not unify at the scale of the proton/neutron.

The world is made of more than just charges, you know; plots like this one, https://i.stack.imgur.com/B1lvU.jpg, and thousands of others, need to be explained too. The Standard Model does this; your speculative alterations to it certainly will not.

You know, I do a lot of thinking, and we do have equations for these things — **and they work**.

I think this conversation needs to stop now; up to now I gave you the benefit of the doubt, but now you’re starting to make all sorts of wrong statements about things that I understand and you do not.

Hi Matt Strassler,

>… Predictions [from your isovector model] would be great. Here’s one: sterile neutrinos (the 16th particle in each generation). They haven’t shown up yet.

Finally, a chirality question! AI grousing is excellent fun, but now we’re getting somewhere interesting!

The sterile neutrinos are real, but you aren’t going to see them in this universe except in exceptionally fleeting events. Those events may or may not connect to that fascinating stat you mentioned about neutrinos’ considerable mixing angles. I’ve got to look at that issue more closely.

[Edited by host:]

Hi Matt Strassler,

Again, this is your blog, please delete any of my comments, it truly does not bother me! You may be surprised to know I truly enjoy your comments. Also, you’ve moved me from being mostly indifferent to the LHC and its funding to believing it’s one of the most important experiments in the history of science, one that the international community absolutely must continue to fund.

What is different about the strong force such that a quark and anti-quark don’t annihilate in a meson? With the electromagnetic force, if I combined a positron and an electron, they would collapse into each other after a short time and annihilate.

Great question.

Remember the quarks come in different flavors. A meson that’s made from one quark and a different antiquark can last a long time, just as an atom made from an electron and an anti-muon would last until the latter decayed.

However, mesons made from a quark and the corresponding anti-quark indeed have shorter lifetimes, because they can annihilate to gluons or photons. (There are subtleties with the very light ones, which cannot decay via gluons; they have to decay electromagnetically. But still, the decays are quite fast.)

Still, even in positronium, the electron and positron orbit each other a very large number of times before they annihilate. It’s far from instantaneous.
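[Editor's aside: "a very large number of times" can be made rough-quantitative with a Bohr-model estimate — an order-of-magnitude sketch only, since positronium is of course a quantum system and doesn't have classical orbits:]

```python
import math

# Rough Bohr-model estimate of how many "orbits" the electron and positron
# in para-positronium complete before annihilating.
alpha = 1 / 137.036            # fine-structure constant
c = 2.998e8                    # speed of light, m/s
a0 = 5.29e-11                  # hydrogen Bohr radius, m

a = 2 * a0                     # positronium "Bohr radius" (reduced mass m_e/2)
v = alpha * c                  # relative speed in the ground state
period = 2 * math.pi * a / v   # one classical orbit, ~3e-16 s

lifetime = 1.25e-10            # para-positronium lifetime, s
orbits = lifetime / period
print(f"{orbits:.0e}")         # roughly 4e5 orbits before annihilation
```

Ortho-positronium, with its ~10^-7 s lifetime, manages hundreds of millions of orbits — far from instantaneous indeed.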

The lifetimes of bound states cover a very wide range, for many different reasons, and that’s a whole week of a particle physics course.