*Matt Strassler [August 27 - September 9, 2013]*

**What is “Naturalness”?**

*[This subject is closely related to the hierarchy problem.]*

What do particle physicists and string theorists mean when they refer to a particular array of particles and forces as “natural”? They don’t mean “part of nature”. Everything in the universe is part of nature, by definition.

The word “natural” has multiple meanings. The one that scientists are using in this context isn’t “having to do with nature” but rather “typical” or “generic” — “just what you’d have expected”, or “the usual” — as in, “naturally the baby started screaming when she bumped her head”, or “naturally it costs more to live near the city center”, or “I hadn’t worn those glasses in months, so naturally they were dusty.” And unnatural is when the baby doesn’t scream, when the city center is cheap, and when the glasses are pristine. **Usually, when something unnatural happens, there’s a good reason.**

In most contexts in particle physics and related subjects, surprises — big surprises, anyway — are pretty rare. That means that if you look at a physical system, it usually behaves more or less along lines that, with some experience as a scientist, you’d naturally expect. If it doesn’t, then (experience shows) there’s generally a really good reason… and if that reason isn’t obvious, the unnatural behavior of the system may be pointing you to something profound that you don’t yet know.

For our purposes here, the reason the notion of naturalness is so important is that there are two big surprises in nature that we particle physicists and our friends have to confront. The first is that the *cosmological constant* [often referred to as “dark energy” in public settings] is amazingly small, compared to what you’d naturally expect. The second is that the hierarchy between the strength of gravity and the strengths of the other forces is amazingly big, compared to what you’d expect.

The second one can be restated as follows: the Standard Model (combined with Einstein’s theory of gravity) — the set of equations we use to predict the behavior of all the known elementary particles and all the known forces — is a profoundly, enormously, spectacularly *unnatural* theory. There’s only one aspect of physics — perhaps only one aspect in all of science — that is more unnatural than the Standard Model, and that’s the cosmological constant.

**The Notion of “Natural” and “Unnatural”**

I think the concept of naturalness is best illuminated by a bit of story-telling.

A couple of friends of mine from college (I’ll call them Ann and Steve) got married, and now have two teenage children. Back when their kids were younger — say, 4 and 7 years old — they were pretty wild. They often played rough, got mad at each other, threw things, and generally needed a lot of supervision.

One day, Ann bought some beautiful flowers and put them in her favorite glass vase. But before she put the vase on the kitchen table, the doorbell rang. She ran to the front, carrying the vase, and as she made her way to the door, she absent-mindedly put the vase down on the small, rickety table that sits by the wall of the kids’ play room.

Half an hour later, Steve returned home with the kids, and sent them into the play room to occupy themselves while he and Ann settled in from the day and prepared dinner. They heard the usual sounds: bumps and crashes, the sounds of bouncing balls and falling blocks, yells of “no fair” and “ow! stop that!”, a moment of screaming that blissfully stopped almost as soon as it started…

It was forty-five minutes later when Ann noticed the vase with the flowers wasn’t on the kitchen table. After a moment searching the kitchen and dining room, she suddenly realized that she’d put it down and forgotten it in the most dangerous place in the house.

So she went running into the play room, hoping she wasn’t too late. And what do you think she found when she opened the door?

Guess. You get three options (Figure 1). Choose the most plausible.

- The vase was exactly where she’d left it, comfortably placed at the center of the table.
- The vase was smashed, and the flowers crushed, down on the floor.
- The vase was hanging off the table, right at the edge, within a millimeter of disaster.

Well, the answer is #3. There it was, just hanging there.

Somehow I suspect you don’t believe me. Or at least, if you do believe me, you probably are assuming there must be some complicated explanation that I’m about to give you as to how this happened. It can’t possibly be that two young kids were playing wildly in the room and somehow managed to get the vase into this extremely precarious position just by accident, can it? For the vase to end up *just so* — not firmly on the table, not falling off the table, but just in between — that’s … **that’s not natural!**

There must (mustn’t there?) be an *explanation*.

Maybe there was glue on the side of the table and the vase stuck to it before falling off? Maybe one of the kids was hiding behind the table and holding the vase there as a practical joke on his mom? Maybe her husband had somehow tied a string around the vase and attached it to the table, or to the ceiling, so that the vase couldn’t fall off? Maybe the table and vase are both magnetized somehow…?

Something so unnatural as that can’t just end up that way on its own… especially not in a room with two young children playing rough and throwing things around.

**The Unnatural Nature of the Standard Model**

Well. Now let’s turn to the Standard Model, combined with Einstein’s theory of gravity.

I want you to imagine a universe much like our own, described by a complete set of equations — a “theory”, in theoretical-physics speak — much like the Standard Model (plus gravity). To keep things simple, let’s say this universe even has all the *same* elementary particles and forces as our own. The only difference is that the strengths of the forces, and the strengths with which the Higgs field interacts with other known particles and with itself (which in the end determines how much mass the known particles have) are a little bit different, say by 1%, or 5%, or maybe even up to 50%. In fact, let’s imagine ALL such universes… all universes described by Standard Model-like equations in which the strengths with which all the fields and particles interact with each other are changed by up to 50%. What will the worlds described by these slightly different equations (shown in a nice big pile in Figure 2) be like?

Among those imaginary worlds, we will find three general classes, with the following properties.

- In one class, the Higgs field’s average value will be zero; in other words, the Higgs field is OFF. In these worlds, the Higgs particle will have a mass as much as **ten thousand trillion** (10,000,000,000,000,000) times larger than it does in our world. All the other known elementary particles will be massless (up to small caveats I’ll explain elsewhere). In particular, the electron will be massless, and there will be no atoms in these worlds.
- In a second class, the Higgs field is FULL ON. The Higgs field’s average value, and the Higgs particle’s mass, and the mass of all known particles, will be as much as **ten thousand trillion** (10,000,000,000,000,000) times larger than they are in our universe. In such a world, there will again be nothing like the atoms or the large objects we’re used to. For instance, nothing large like a star or planet can form without collapsing and forming a black hole.
- In a third class, the Higgs field is JUST BARELY ON. Its average value is roughly as small as in our world — maybe a few times larger or smaller, but comparable. The masses of the known particles, while somewhat different from what they are in our world, at least won’t be wildly different. And none of the types of particles that have mass in our own world will be massless. In some of those worlds there can even be atoms and planets and other types of structure. In others, there may be exotic things we’re not used to. But at least a few basic features of such worlds will be recognizable to us.

Now: what fraction of these worlds are in class 3? Among all the Standard Model-like theories that we’re considering, what fraction will resemble ours at least a little bit?

The answer? A ridiculously, absurdly tiny fraction of them (Figure 3). If you chose a universe at random from among our set of Standard Model-like worlds, the chance that it would look vaguely like our universe would be spectacularly smaller than the chance that you would put a vase down carelessly on a table and end up putting it right on the edge of disaster, just by accident.

In other words, **if** (and it’s a big “if”) the Standard Model (plus gravity) describes everything that exists in our world, then among all possible worlds, we live in an extraordinarily unusual one — one that is as unnatural as a vase nudged to within an atom’s breadth of falling off the table. Classes 1 and 2 of universes are natural — generic — typical; most Standard Model-like theories would give universes in one of those classes. Class 3, of which our universe is an example, includes the possible worlds that are extremely non-generic, non-typical, unnatural. That we should live in such an unusual universe — especially since we live, quite naturally, on a rather ordinary planet orbiting a rather ordinary star in a rather ordinary galaxy — is unexpected, shocking, bizarre. And it is deserving, just like the weirdly placed vase, of an explanation. One certainly has to suspect there might be a subtle mechanism, something about the universe that we don’t yet know, that permits our universe to *naturally* be one that can live on the edge.

And what is the analogy to the playing children who endanger the vase, and make its balanced condition especially implausible? It is quantum mechanics itself — the very basic operating principles of our world. Quantum effects do not coexist well with accidental, unstable balance.

I’ll go on to discuss those quantum effects, and how they make the Standard Model unnatural, in a moment. But first, although I hope you liked my story, I should point out there’s one important difference between the vase on the table and the universe. If somebody bumps the table or the vase, it will probably fall off, or perhaps, if we’re lucky, slide toward the center of the table. In other words, it can easily move away from its precarious position if it is disturbed. Our universe, by contrast, is not in danger *currently* of smoothly shifting its properties, and becoming a universe in Class 1 or Class 2. *[While it is possible that someday it could shift suddenly to become a very different universe, through a process known as tunneling or vacuum decay, this event is likely to be unimaginably far off; this is a subject for another day, but it's not something to worry about.]* The real issue for the universe is in the past: how, among the vast number of possible universes, did we end up in such an apparently unnatural one? Is there something about our universe that we don’t yet know which makes it not as unnatural as it seems? Or perhaps the fact that many (most?) natural universes don’t seem hospitable for life has something to do with it? Or maybe we humans haven’t been clever enough yet, and there’s some other subtle scientific explanation? Whatever the reason, either it is due to a timeless fact or due to something that happened very long ago; the universe (or at least the large region we can see with our eyes and telescopes) has been unchangingly unnatural [if the Standard Model fully describes it] for billions of years, and won’t be changing anytime in the near future.

In any case, let’s move on now, to understand the quantum physics that makes a universe described by the Standard Model (and gravity) so incredibly unusual.

**Quantum Physics and (un)Naturalness**

At this point, please read about quantum fluctuations of quantum fields, and the energy carried in those fluctuations, if you haven’t already done so. Along the way you’ll find out a little about another naturalness problem: the cosmological constant. After you’ve read that article, you can continue with this one.

**Back to the Higgs (and Other Similar Particles)**

Quantum fluctuations of fields, and their contribution to the energy density of empty space (the so-called “vacuum energy”) play a big part in our story. But our goal here requires we set the cosmological constant problem aside, and focus on the Higgs particle and on why the Standard Model is unnatural. This is not because the cosmological constant problem isn’t important, and not because we’re totally certain the two problems are completely unrelated. But since the cosmological constant has everything to do with gravity, while the problem of the Higgs particle and the naturalness of the Standard Model doesn’t have anything to do with gravity directly, it’s quite possible they’re solved in different ways. And each of the two problems is enormous on its own; if in fact we need to solve them simultaneously, then the situation just gets worse. So let’s just send the cosmological constant to a far corner to take a little nap. We do need to remember that it’s the elephant in the room that we can’t forever ignore.

Ok — about the Higgs field. There are three really important questions about the Higgs field and particle that we want to answer. [I'll phrase all these questions assuming the Standard Model is right, or close to right, but if it isn't, don't worry: the ideas I'll explore remain essentially the same, even though slightly different phrasing is required.]

- The Higgs field is “ON” — its average value, everywhere and at all times, at least since the very early universe, isn’t zero. **Why is it on?**
- Its average value is 246 GeV. **What sets its value?**
- The Higgs particle has a mass of about 125 GeV/c². **What sets this mass?**

I’m going to explain to you how and why these questions are related to the issue of how the energy of empty space (part of which comes from quantum fluctuations of fields) depends on the Higgs field’s average value.

**The Higgs Field’s Value and the Energy of Empty Space**

For any field — not just the Higgs field — how is it determined what the average value of the field is in our universe? Answer: a field’s average value must have the following property: if you change the value by a little bit, larger or smaller, then the energy in empty space must increase. In short, the field must have a value for which the energy of empty space is at a **minimum** — not necessarily *the* minimum, but *a* minimum. *(If there is more than one minimum, then which one is selected may depend on the history of the universe, or on other more subtle considerations I won’t go into now.)*

A couple of illustrative examples of how the energy of empty space in our universe, or in some imaginary universe, might depend on the Higgs field, or on some other similar field, are shown in Figure 4. In each of the two cases I’ve drawn, there happen to be two minima where the Higgs field could sit — but that’s just chance. In other cases there could be several minima, or just one. The fact that the Higgs field is ON in our world implies there’s a minimum in the universe’s vacuum energy when the Higgs field has a value of 246 GeV. While it’s not obvious from what I’ve said so far, we are confident, from what we know about nature and about our equations, that there is no minimum when the Higgs field is zero, and that’s why our universe’s Higgs field isn’t OFF. So in our universe, the dependence of the vacuum energy on the Higgs field probably looks more like the left-hand figure than the right-hand one, but, as we’ll see, it may not look much like either of them. If the Standard Model describes physics at energies much above and at distances much shorter than the ones we’re studying now at the Large Hadron Collider (LHC), then the form of the corresponding curve is much more peculiar — as we’ll see later.

**The Higgs Particle’s Mass and the Energy of Empty Space**

What about the Higgs particle’s mass? It is determined (Figure 4) by how quickly the energy of empty space changes as you vary the Higgs field’s value away from where it prefers to be. Why?

A Higgs particle is a little ripple in the Higgs field — i.e., as a Higgs particle passes by, the Higgs field has to change a little bit, becoming in turn a bit larger and smaller. Well, since we know the Higgs field’s average value sits at a minimum of the energy of empty space, any small change in that value *slightly increases* that overall energy. This extra bit of energy is *[actually half of]* what gives the Higgs particle its mass-energy (i.e., its E=mc² energy). If the shape of the curve is very flat near the minimum (see Figure 4), the energy required to make a Higgs particle is rather small, because the extra energy in the rippling Higgs field (i.e., in the Higgs particle) is small. But if the shape of the curve is very sharp near the minimum, then the Higgs particle has a big mass.

Thus it is the flatness or sharpness in the curve in the plot, at the point where the Higgs field’s value is sitting — the “curvature at the minimum” — that determines the Higgs particle’s mass.
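For readers who like to see this numerically, here is a minimal sketch (using a toy potential invented purely for illustration, NOT the Standard Model’s actual vacuum-energy curve) of how the curvature at the minimum sets the particle’s mass: two curves with minima in the same place, but different curvatures there, give different masses.

```python
import math

# A toy vacuum-energy curve (an invented example, NOT the Standard Model's
# actual curve): V(phi) = lam * (phi**2 - v**2)**2, whose minimum sits at
# phi = v. The particle's mass is set by the curvature V''(v) at that minimum.

def curvature_at(V, phi0, h=1e-4):
    """Numerical second derivative of V at phi0 (central difference)."""
    return (V(phi0 + h) - 2.0 * V(phi0) + V(phi0 - h)) / h**2

v = 246.0  # GeV, the Higgs field's average value in our universe

def make_potential(lam):
    return lambda phi: lam * (phi**2 - v**2) ** 2

flat = make_potential(1e-4)   # curve that is shallow near its minimum
sharp = make_potential(1e-1)  # curve that is steep near its minimum

# mass ~ sqrt(curvature at the minimum): the flatter curve gives the
# lighter particle, the sharper curve the heavier one.
m_flat = math.sqrt(curvature_at(flat, v))
m_sharp = math.sqrt(curvature_at(sharp, v))
assert m_flat < m_sharp
```

The only thing that differs between the two curves is the overall coefficient, i.e. how sharply they bend at the minimum — and that alone changes the mass.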

**Why It Isn’t Easy to Have The Higgs Particle’s Mass Be Small**

The Higgs particle’s mass is measured to be about 125-126 GeV/c², about 134 times the proton’s mass. Now why can’t we just put that mass into our equations, and be done with this question about where it all comes from?

The problem is that the Higgs field’s value, and the Higgs particle’s mass, aren’t things you put directly *into* the equations that we use; instead, you extract them, by a complex calculation, *from* the equations we use. **And here we run into some difficulty…**

We get these two quantities — the average value and the mass of the field and particle — by looking at how the energy of empty space depends on the Higgs field. And that energy, as in any quantum field theory like the Standard Model, is a sum of many different things:

- energy from the fluctuations of the Higgs field itself
- energy from the fluctuations of the top quark field
- energy from the fluctuations of the W field
- energy from the fluctuations of the Z field
- energy from the fluctuations of the bottom quark field
- energy from the fluctuations of the tau lepton field
- …

and so on for all the fields of nature that interact directly with the Higgs field… I’ve indicated these — *schematically! these are not the actual energies* — as blue curves in Figure 5. Each plot indicates one contribution to the energy of empty space, and how it varies as the Higgs field’s average value changes from zero to the maximum value that I dare consider, which I’ve called v_{max}.

*[Note: Some of you may have read that these calculations of the energy of empty space give infinite results. This is true and yet irrelevant; it is a technicality, true only if you assume v_{max} is infinitely large — which it patently is not. I have found that many people, non-scientists and scientists alike, believe (thanks to books by non-experts and by the previous generations of experts — even Feynman himself) that these infinities are important and relevant to the discussion of naturalness. This is false. We'll return to this widespread misunderstanding, which involves mistaking mathematical technicalities for physically important effects, at the end of this section.]*

What is v_{max}? It’s as far as one can push up the Higgs field’s value and still believe our calculations within the Standard Model. What I mean by v_{max} is that if the Higgs field’s value were larger than this (which would make the top quark’s mass larger than about v_{max}/c^{2}) then the Standard Model would no longer accurately describe everything that happens in particle physics. In other words, v_{max} is the boundary between where the Standard Model is applicable and where it isn’t.

However, we don’t know what v_{max} is… and that ignorance is going to play a role in the discussion. From what we’ve learned at the LHC, v_{max} appears to be something like 500 GeV or larger. However, for all we know, v_{max} could be as much as 10,000,000,000,000,000 times larger than that. We can’t go beyond that point, because that’s the (maximum possible) scale at which gravity becomes important; if v_{max} were that large, top quarks would be so heavy they’d be tiny black holes — and we know that the Standard Model can’t describe that kind of phenomenon. A quantum-mechanical version of gravity has to be invoked at that point… if not before!

So again, what we know is that v_{max} is somewhere between 500 GeV and 1,000,000,000,000,000,000 GeV or so. In Figure 5, I’ve assumed it’s quite a bit bigger than 500 GeV; we’ll look in Figure 6 at the case where v_{max} is close to 500 GeV.

Each one of the contributions in the upper row of Figure 5 is something we can (in principle, and to a large extent in practice) calculate, for any Higgs field value between zero and v_{max}, and for all quantum fluctuations with energy less than about v_{max}. *[I'm oversimplifying somewhat here; really this energy E_{max} need not be quite the same as v_{max}, but let's not get more complicated than necessary.]* If v_{max} is big, then each one of these contributions is **really big** — and more importantly, the variation as we change the Higgs field’s value from zero to v_{max} is big too — something like v_{max}^{4}/(hc)^{3}… where h is Planck’s quantum constant and c is the universal speed limit, often called “the speed of light”.

But that’s not all. To this we have to add other contributions, shown in the second row of Figure 5, which come from *physical phenomena that we don’t yet know much, or anything, about*, physics that does not directly appear in the Standard Model at all. *[Technically, we absorb these effects from unknown physics into parameters that define the Standard Model's equations, as inputs to those equations; but they are inputs, rather than something we calculate, precisely because they're from unknown sources.]* In addition to effects from quantum fluctuations of known fields with even higher energies, there may also be effects from

- the quantum mechanics of gravity,
- heavy particles we’ve not yet discovered,
- forces that are only important at distances far shorter than we can currently measure,
- other more exotic contributions from, say, strings or D-branes in string theory or some other theory like it,
- etc.,

some of which may depend, directly or indirectly, on the Higgs field’s value. I’ve drawn these unknown effects in red; note that these curves are pure guesswork. We don’t know anything about these effects except that they *could* exist (and the gravity effects **definitely** exist), and that some or all of them could be **really big** … as big as or bigger than the ones we know about in the upper row. In principle, all these unknown effects *could* be zero — but that wouldn’t resolve the naturalness problem, as we’ll see, so presumably they’re *not* all zero.

What’s crucial here is that * there’s no obvious reason to expect these unknown effects in red are in any way connected with the known contributions in blue*. After all, why should quantum gravity effects, or some new force that has nothing to do with the weak nuclear force, have anything to do with the energy density of quantum fluctuations of the top quark field or of the W field? These seem like conceptually separate sources of the energy density of empty space.

And here’s the puzzle. When we add up all of these contributions to the energy of empty space *[Unsure how to add curves like these together? Click here for an explanation...]* — each of which is big and many of which vary a lot as the Higgs field’s value changes from zero to the maximum that we can consider — we find **an incredibly flat curve**, the one shown in green. It’s almost perfectly flat near the vertical axis. And yet, its minimum is *not quite* at zero Higgs field; it’s slightly away from zero, at a Higgs field value of 246 GeV. *All of those different contributions in blue and red, which curve up and down in varying degrees, have almost (but not quite) perfectly canceled each other when added together.* It’s as though you piled a few mountains from Montana into a deep valley in California and ended up with a plain as flat as Kansas. **How did that happen?**

Well, how bad is this problem? How surprising is this cancellation? The answer is that it depends on v_{max}. If v_{max} is only 500 GeV, then there’s no real cancellation needed at all — see Figure 6. But if v_{max} is huge, the cancellation is incredibly precise, as in Figure 5. The larger is v_{max}, the more remarkable it is that all the contributions cancelled.

How remarkable? The cancellation has to be perfect to something like one part in (v_{max}/500 GeV)^{2}, give or take a few. So if v_{max} is close to 500 GeV, that’s no big deal; but if v_{max} = 5000 GeV, we need a cancellation to one part in 100. If it’s 500,000 GeV, we need cancellation to one part in a million.

And if we take v_{max} as high as possible — if the Standard Model describes all non-gravitational particle physics — then **we need cancellation of all these different effects to one part in about 1,000,000,000,000,000,000,000,000,000,000.**
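It’s easy to tabulate this scaling yourself — a rough back-of-envelope sketch in Python, using the one-part-in-(v_{max}/500 GeV)² rule of thumb quoted above:

```python
# Rough fine-tuning estimate from the scaling quoted in the text:
# the contributions must cancel to about one part in (v_max / 500 GeV)**2.

def cancellation(v_max_gev, v_ref_gev=500.0):
    """Required cancellation precision for a given v_max (both in GeV)."""
    return (v_max_gev / v_ref_gev) ** 2

for v_max in (500.0, 5e3, 5e5, 1e18):
    print(f"v_max = {v_max:.0e} GeV -> one part in {cancellation(v_max):.0e}")
# v_max = 5e+02 GeV -> one part in 1e+00   (no real cancellation needed)
# v_max = 5e+03 GeV -> one part in 1e+02
# v_max = 5e+05 GeV -> one part in 1e+06
# v_max = 1e+18 GeV -> one part in 4e+30   (the extreme case just mentioned)
```

The precision required grows quadratically with v_{max}, which is why the problem is mild if v_{max} is near 500 GeV and catastrophic if it is near the gravity scale.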

In the last case, the incredible delicacy of the cancellation is particularly disturbing. It means that if you could alter the W particle’s mass, or the strength of the electromagnetic force, by a tiny amount — say, one part in a million million — the cancellation would completely fail, and you’d find the theory would be in Class 1 or Class 2, with an ultra-heavy Higgs particle and either a large or absent Higgs field value (see Figure 3). This incredible sensitivity means that the properties of our world have to be, very precisely, *just so* — like a radio that is set exactly to the frequency of a desired radio station, **finely tuned**. Such extreme “fine-tuning” of the properties of a physical system has no precedent in science.

To say this another way: what’s unnatural about the Standard Model — specifically, about the Standard Model being valid up to the scale v_{max}, if v_{max} is much larger than 500 GeV or so — is the cancellation shown in Figure 5. It’s not generic or typical… and the larger is v_{max}, the more unnatural it is. If you take a bunch of generic curves like those in Figure 5, each of which has minima and maxima at Higgs field values that are either at zero or somewhere around v_{max}, and you add those curves together, you will find that the sum of those curves is a curve that also has its minima and maxima at

- a substantial fraction of v_{max} [**Class 2** theories — see Figure 3],
- or at zero [**Class 1** theories],
- but not somewhere non-zero that is *much much smaller* than v_{max} [**Class 3** theories].

Moreover, if the curves are substantially curved near their minima and maxima, their sum will also typically have substantial curvature near its minima and maxima [i.e. the Higgs particle's mass will be roughly v_{max}/c^{2}, as in Class 1 and Class 2 theories], and won’t be extremely flat near any of its minima [needed for the Higgs particle to be much lighter than v_{max}/c^{2}, as occurs in Class 3 theories]. This is illustrated, for the addition of just two curves, in Figure 7, where we see the two curves have to have a very special relationship if their sum is to end up very flat.
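This “very special relationship” can be mimicked numerically. In the sketch below (the two quartic curves are invented stand-ins for illustration, not the real contributions to the vacuum energy), the coefficients are matched so that the sum has its minimum at 246 GeV even though v_{max} is taken to be a million GeV; detuning one curve by just one part in a million destroys the balance entirely.

```python
import numpy as np

# Invented stand-ins for two big contributions to the vacuum energy
# (in the spirit of Figure 7); coefficients in GeV, chosen for illustration.
v_max = 1.0e6   # an assumed, large validity scale (GeV)
v_ours = 246.0  # the Higgs field's observed average value (GeV)

phi = np.linspace(0.0, 2000.0, 20001)  # scan small field values (step 0.1 GeV)

def minimum_at(curve):
    """Field value where the scanned curve is smallest."""
    return phi[np.argmin(curve)]

# One "known" contribution, and one "unknown" contribution whose
# coefficients are delicately matched against it:
known = 2.0 * phi**4 - v_max**2 * phi**2
tuned = -phi**4 + (v_max**2 - 2.0 * v_ours**2) * phi**2

total = known + tuned  # algebraically: phi**4 - 2 * v_ours**2 * phi**2
# The two huge quartic/quadratic pieces cancel, leaving a minimum near 246 GeV.
assert abs(minimum_at(total) - v_ours) < 0.5

# Now detune the unknown contribution by one part in a million:
spoiled = known + tuned * (1.0 + 1.0e-6)
# The leftover quadratic term turns positive: the minimum jumps to zero
# (a Class 1 world, with the Higgs field OFF).
assert minimum_at(spoiled) == 0.0
```

Note how the required matching between the two sets of coefficients is at the one-part-in-(v_max/500 GeV)² level, just as described above.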

That’s the naturalness problem. It’s not just that the green curve in Figure 5 is remarkably flat, with a minimum at a small Higgs field value. It’s that this curve is an *output*, a sum of many large and apparently unrelated contributions, and it’s not at all obvious how the sum of all those curves comes out to have such an unusual shape.

**An Aside About Infinities, Renormalization, and Cut-offs**

[You can skip this little section if you want; you won't need to understand it to follow the rest of the article.]

Now, about those infinities that you may have read about — along with the scary-sounding word “renormalization”, in which infinities seem to be somehow swept under the rug, leading to finite predictions. These infinities, and their removal via renormalization, sometimes lead people — even scientists — to claim that particle physicists don’t know what they are doing, and that this causes them to see a naturalness problem where none exists.

Such claims are badly misguided. These technical issues (which are well understood nowadays, in any case) are completely irrelevant in the present context.

The infinities that arise in certain calculations of the Higgs particle’s mass, and of the Higgs field’s value, are a **symptom** of the naturalness problem, a mathematical symptom that shows up if you insist on taking v_{max} to infinity, which, though often convenient, is an unphysical thing to do. **The infinities are not the naturalness problem, nor are they at its heart, nor are they its cause.**

Among many ways to see this, one very easy way is to study the wide variety of finite quantum field theories discovered in the 1980s *(a list of references can be found in an old paper of mine with Rob Leigh [now a professor at the University of Illinois].)* These theories have minimal amounts of supersymmetry, as well as being finite. If you take such a theory (see Figure 8), and you ruin the supersymmetry at a scale v_{max}, while assuring the theory that remains at lower energies still has spin-zero fields like the Higgs field, you do not introduce any infinities. Moreover, there is no need to artificially cut the theory off at energies below v_{max} (as I have done in Figure 5, separating known from unknown) since in this example we know the equations to use at energies above as well as below v_{max}. The energy of empty space, and its dependence on the various fields, can be calculated without any ambiguity, infinities, or infinite renormalization. So — is there a naturalness problem here too? Do the spin-zero particles generically get masses as big as v_{max}/c^{2}? Do the spin-zero fields have values that are either zero or roughly as big as v_{max}? **You Bet!** No infinities, no sweeping anything under a rug, no artificial-looking cutoffs — and a naturalness problem that’s just as bad as ever.

*By the way, there’s an interesting loophole to this argument, using a lesson learned from string theory about quantum field theory. But though it gives examples of theories that evade the naturalness problem, neither I nor anyone else has (so far) been able to use it to really solve the naturalness problem of the Standard Model in a concrete way. Perhaps the best attempt was this one.*

We could also repeat this type of calculation within string theory *(a technical exercise, which does not require we assume string theory really describes nature)*. String theory calculations have no infinities. But if v_{max}, the energy scale where the Standard Model fails to work, is much larger than 500 GeV, the naturalness problem is just as bad as before.

In short: getting rid of the infinities that arise in certain Higgs-related calculations does **NOT** by itself solve or affect the naturalness problem.

**Solutions to the Naturalness Problem**

On purely logical grounds, a couple of qualitatively different types of solutions to this problem come to mind. *[To be continued...]*


Great article Matt. More please…

Excellent Explanation. Is there a calculation as to exactly how unnatural our universe is or do we simply know that it is a small probability and the exact number is dependent on the specifics of various theoretical frameworks one may use? I want to know whether the probability is something on the order of one in one to the googol or something even much smaller than this.

You can do a precise (or rough) calculation if you make definite (or rough) assumptions. The real problem is that you don’t know the probability “measure”. One way to say this is that if I roll two dice and I *assume* they are fair dice, then I can calculate the probability of the dice showing 9 dots. But if I don’t *know* they are fair dice, I can’t calculate it. If I guess they are roughly fair, I can roughly calculate.
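The dice analogy can be made concrete with a toy calculation (a sketch of my own; the dice weights below are illustrative, not anything from the article): with fair dice the probability of 9 dots is fixed, but change the assumed weighting of the faces and the answer changes too, which is exactly the "unknown measure" problem.

```python
from fractions import Fraction
from itertools import product

def p_sum(target, weights):
    """Probability that two dice with the given per-face weights sum to `target`."""
    total = sum(weights) ** 2
    hits = sum(weights[a - 1] * weights[b - 1]
               for a, b in product(range(1, 7), repeat=2)
               if a + b == target)
    return Fraction(hits, total)

fair = [1] * 6                 # assume a fair measure: every face equally likely
print(p_sum(9, fair))          # 4/36 = 1/9

biased = [1, 1, 1, 1, 3, 3]    # a different assumed measure: 5s and 6s favored
print(p_sum(9, biased))        # 12/100 = 3/25
```

The calculation itself is trivial; the whole difficulty lies in which `weights` list you are entitled to assume.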

We’re in the situation of having at best an extremely rough guess at the probabilities, so we have, at best, an extremely rough estimate. But when you’re dealing with numbers that are this small, getting them wrong by a huge amount doesn’t change the qualitative conclusion: our universe, no matter how you calculate it, is very unusual, on the face of it.

I thought the idea of random physical constants was still speculative. The basic theory, if I understand it, is that we began as a fluctuation within a multiverse from which many other universes may also have been born; is that right? But what if our universe turns out to be — as most physicists seemed to believe when I was at Berkeley — the only one? In that case, maybe you will have to wait for more experimental data, or some mathematical revolution, before you can say anything definite about the values of the constants.

Correction: sorry meant to write is the probability one in a googol, not one in one to the googol: that would be silly.

Matt,

Do you have values for the fractions?

I’m having a hard time estimating the number of balls in the other two figures

See the remark by the mathematician below, and my (upcoming) reply. We cannot calculate the fraction without defining a probability measure. Nevertheless, when you are dealing with numbers this gigantic, you can see you have a serious problem even if you don’t know if the problem is one in a million trillion or one in a trillion trillion or one in a trillion trillion trillion. The point is that it’s certainly not one in a thousand.

Seems to me that this is heading in the same direction as the anthropic principle. Do you think the two ideas, i.e. the anthropic principle and naturalness, are the same, linked, or totally different?

It’s very important, before you start talking about solutions to a problem, to make sure you understand the problem. I cannot answer your question without having gone further in this article; please be patient.

As a mathematician (not a physicist) I find this argument rather unconvincing. It depends fundamentally on the existence of some meaningful measure on the parameter space which is not too far removed from the scale in which we choose to express the model. After all, if you apply log enough times, any numbers become the same order of magnitude. Are we certain that the whole problem isn’t just caused by a misrepresentation of the parameters? It’s not as if we can examine the parameter space experimentally.

Of course there is something important to explain, but the same is true for all the other parameters of the model: why do they take precisely the values we measure in experiments? Either there is a deeper theory which explains them all (presumably including the very large ones) or we are part of a multiverse, and anthropics explains away anything.

Can the whole problem be reduced to “big numbers are more important than small ones”?

The argument is NOT entirely convincing. But it is a strong argument nevertheless, because the numbers involved are so huge — typically something like 10^{-32} or so — that you’d have to have a hugely convoluted measure on the parameter space to make it fail. In short: if it is a problem of “a misrepresentation of the parameters”, then it’s a huge problem whose solution will likely earn someone a professorship at a major university, and probably a Nobel Prize if it can be shown to be true experimentally. Certainly no one has ever proposed a re-representation of the parameters which, without adding new particles accessible at scales fairly close to the Higgs particle’s mass, would bring us even close to solving the problem. It’s easy to *say* it’s just a problem of the measure — but give me even *one* example where this would solve a naturalness problem in quantum field theory, and I’ll be extremely impressed.

What makes the argument much more convincing is that there are solids and fluids to which similar arguments apply. The equations are of the same type (quantum field theory) and there are Higgs-like scalars (which are massless at phase transitions). If you take a random solid or fluid system away from a phase transition, and ask if it has scalar particles that are vastly lighter than other massive degrees of freedom in the system, the answer is “no”, unless they are Nambu-Goldstone bosons (which the Higgs in the Standard Model is not). The same is true for the few examples in particle physics. The Standard Model (if indeed it is the complete theory of nature) is unique in this regard. I’ll discuss this more later in the article, I think.

Of course a selection bias (i.e. anthropic principle or something similar) is one possible explanation… but an incomplete one within the Standard Model, because appealing to a selection bias ***also*** requires you to know a probability measure within the multiverse… perhaps the dynamics which causes one set of parameters to be realized more often than another… so it doesn’t resolve the problem you mentioned for the naturalness argument.

As for your last question (if I understand it correctly) — this kind of thing is under active discussion, of course. The answer may be yes. How will we verify this, however? That is the question that has to be addressed… otherwise, it will remain speculation.

Thanks! To expand on my last question, my training is in logic, and to me, any number (except perhaps 0, 1, e & pi) begs for an explanation. I’m uncomfortable with the idea that small numbers might just be facts of life, but large numbers can’t be. Information content is not dependent on magnitude.

But logic and math are different from physics. For example, you may be 1.24534 times taller than your wife. Does that require explanation? No. Why not? Because this ratio is determined by a combination of a dynamical equation, on the one hand, and *initial conditions*, which are not given by pure numbers, on the other.

In general, in physics, dynamical equations (i.e. equations that describe how things change) assure that most pure numbers that we measure are of the order that we would (with experience) guess, up to factors between, say, 0.1 and 10. Sometimes you’ll see factors as extreme as 0.01 or 100, just by accident, or even a bit beyond.

So I would claim you’re profoundly misled by thinking about physics as similar to logic or number theory. It’s not… it’s dynamical evolution, and most results of physics problems are not nice numbers like 1 or pi or even e^pi.

The notion that extremely large (and extremely small) numbers require explanation has a history going back nearly a century and has been extremely profitable for scientists. No explanation is needed for numbers like 3.2435 or 0.543662 — it’s not a matter of “information content”, it’s a matter of whether the dynamical equations we know are likely or not to spit out such a number.

OK, but doesn’t that mean you’re assuming that there is some kind of dynamics in the parameter space? That it is governed by laws of the same character as the laws of physics, even though we can never test those laws in any way?

Going back to your vase/table analogy, aren’t you also assuming that we are at a stable point in the dynamics of parameter space? I.e. that the vase isn’t still falling (with respect to parameter space dynamics)?

This would make sense if you assume that the dynamic parameter is our own familiar time, and all this happened before we could measure any of the parameters of the model, but given the local nature of time in GR, this seems quite a strong assumption.

That point — about dynamical equations — helps me understand better what you are getting at.

When the number represents the odds against an event that has demonstrably occurred, then of course bigger numbers demand more of an explanation than small ones. If you disagree, I’d be interested in hearing your reasoning why over a game of $100 minimum-bet craps. I’ll bring the dice. ;)

This is precisely my point: when you play dice, you know that, however biased the dice may be, there is still some probability measure governing their fall. Even if you don’t know what the odds are, you still know that the dice behave randomly, so you can use both your experience of games and statistical theory to reason about what’s happening, and to judge when the game is fixed.

When you consider the basic parameters of your model of physics, you don’t have that. If Einstein was right, and time is part of the universe, then the question “When were the parameters determined?” doesn’t make sense, because there wasn’t any such “when”. Even the question “How were the parameters determined?” assumes that there is some mechanism that operates beyond our physics to determine them. Saying that large numbers require more explanation than small ones is making assumptions about this mechanism.

I simply want to know what the assumptions behind the naturality argument are. I don’t think it follows from just assuming that GR + SM is all there is.

If physics is dynamic, the parameters can be determined at any time. As I understand it, in some models the particles “froze out” as the temperature of the universe dropped. And that could happen differently in different places. (So-called landscapes of, say, string theory, with many possibilities for physics.)

It’s actually about whether you need any such parameter-forcing mechanism at all. GR+SM allow the parameters to be anything, with no preferences. The values are arbitrary.

If the numbers were small, i.e. if the ratio of the size of the total parameter space to the subset that creates a recognizable (atom-containing) universe were reasonable, then you could continue to say that the values are arbitrary: no mechanism behind them is needed (the null hypothesis, Occam’s Razor), and getting a universe like ours is still no surprise.

When the numbers are this ridiculously huge, then you do need some mechanism to explain how the parameters ended up this way. You either have to figure out some “mechanism that operates beyond our physics” to force these parameters to end up in this particular state, or you have to rely on the Anthropic Principle to beat the long odds.

So the assumption is to *not* assume a bias in the parameter space, *not* to assume some meta-physics mechanism for forcing parameters or merely biasing them. That’s why it follows from assuming GR+SM is all there is. But then we run into the problem which suggests that can’t be true.

I realized you might be saying “Well, you’re assuming a uniform distribution, and why is that assumption any better than any other?”, so I wanted to make the point clearer: the less fine-tuned the parameters need to be, the less need there is to make any assumption at all. As in: pick whatever distribution or rule for the parameters you want. Does it result in a universe vaguely like ours? With the big numbers we have, you need a very finely specified rule to get the parameters we have, and figuring out which one would be a big problem. With small numbers, just about any rule would work, so you ultimately don’t need to assume any rule at all.
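The insensitivity to the choice of prior can be illustrated with a back-of-envelope sketch (my own toy numbers, not from the article): take the Higgs mass-squared parameter, assume the cutoff is near the Planck scale, and ask what fraction of the parameter range lands within the observed electroweak window under two different assumed priors. Both come out absurdly small, so the qualitative conclusion does not depend on the choice.

```python
import math

# Toy numbers (assumptions): a Planck-scale cutoff standing in for v_max,
# and the observed electroweak scale set by the Higgs field's value.
M_PL = 1.2e19      # Planck energy scale in GeV
V = 246.0          # electroweak scale in GeV

target = V**2      # observed size of the Higgs mass-squared parameter, in GeV^2
span = M_PL**2     # "natural" size if the cutoff is the Planck scale

# Prior 1: uniform on [-span, +span]; fraction landing in [-target, +target]:
frac_uniform = target / span                             # ~ 4e-34

# Prior 2: Gaussian centered at 0 with standard deviation `span`
# (small-window approximation near the center):
frac_gauss = 2 * target / (math.sqrt(2 * math.pi) * span)  # same order

print(frac_uniform, frac_gauss)
```

Changing the prior from uniform to Gaussian moves the answer by a factor of order one; only a prior engineered to pile essentially all its weight inside the tiny window would change the conclusion, and that engineering is itself the fine-tuning.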

I don’t understand this point. Is the claim that even though Weinberg was able to predict the value of today’s vacuum energy (cosmological constant), it isn’t relevant to selection bias? And why would a full measure (as I assume the text is describing) be necessary?

A direct comparison would be biology and its selection bias. The full fitness space is never known, nor its underlying mapping to physics. To look for ecological niches in the coastal zone you don’t need to know the exact waves, the exact dynamics of waves, or the exact probability distribution function of waves. You need to know the extremes and the optimum of the “niche construction”, what the population deems fit and optimal.

I would have assumed a gaussian centered on the average between the habitable limits of unnatural parameters (cc, Higgs mass-squared parameter), a gaussian because of contribution of many other factors in the SM, would be the expected and needed probability measure for selection bias. Why wouldn’t that work?

[Disclaimer: astrobiological interest, hence cosmological interest.]

I don’t think we can claim that we can’t examine the parameter space of universes experimentally. People have suggested that in some cases of multiverses we could observe bubble collision “trace fossils”. So there are experimental constraints on this parameter space, whether the answer would be positive or negative.

It is my understanding that one of the assumptions that goes into predicting classes 1 & 2 being much more probable than class 3 is that there is no new physics between the Higgs scale and the Planck scale. If this is correct, it seems like a non-conservatively radical assumption.

Of course it is. I made very clear what my assumption was: that there is the Standard Model plus gravity, and nothing else. The whole point is that the unnaturalness of the theory gives us reasons to think the assumption is wrong. But we have a long way to go before we understand the crucial subtleties.

Looks like, unless someone finds a reason for unnaturalness (an extremely small probability event), we are stuck with the anthropic argument. I am not an atheist, so I do not mind!!! But some people are shocked by this and try to get out of it via the multiverse argument. Question: what is your opinion on the idea of a multiverse, which has practically zero chance of verification?

Do not confuse weak-anthropic-principle arguments (which are statistical and do not require any discussion about where the universe [or if you prefer, "multiverse"] came from) with arguments about the existence of a non-scientific (for example, divine) origin of the universe. There’s no conflict between atheism and the *weak* anthropic principle. Only the very *strongest* form of the anthropic principle — “the universe was designed specifically for humans” — requires any discussion of who did the designing. This is *not* the form of the anthropic principle discussed today by scientists.

As for our being “stuck with an anthropic argument”, you’re jumping the gun a bit. Remember the LHC hasn’t run at its highest energies yet and has collected only 10% of its total data set (in fact it’s now 1% of the total through the planned upgrade into the year 2025 or so). We do not know that the Standard Model is correct; I’m merely pointing out how surprising it will be (from the naturalness point of view) if it is.

Sorry. I did not want to bring religion into this discussion. But my understanding is that the multiverse idea was brought in to get out of fine-tuning (perhaps without any motive to fight the anthropic argument, weak or strong). String theory also points to that. But until it is verified experimentally, it remains just one hypothesis, which is ok with me. I would still like to hear your opinion about the multiverse.

Stay tuned.

Hi Matt,

Interesting to see the conclusions in your final text version. I think that there is no naturalness after all (based on my hypothesis). I’ll get back to you on this issue later.

My humble view…

If there is no naturalness, there is no time dilation, there is no constancy of the speed of light; in reference to it, we will never be aging.

If there is no unnaturalness, there is no “rest (invariant) mass” and there will be no matter?

Why is that? Could you open up your reasoning a bit?

Time slowdown means nearing the speed of light, until ultimately the clock stops ticking – thus becoming massless.

Masslessness of standard model is naturalness.

The unnatural disturbance in simple harmonic motion of quantum fields (natural) creates the “rest mass”, due to quantum mechanics.

The mass of known particles in the Standard Model is unnaturalness?

Soothing explanation Professor,

What is physical reality: is it naturalness or unnaturalness?

Is it really 100% sure that the universe will not be changing any time soon? Elsewhere I’ve heard about vacuum transitions etc.?

It will certainly not change smoothly, which was really the only point I was making here. And it is unlikely to change soon.

Contrast this with the vase. A breath of fresh air or a little vibration could destabilize it at any time.

Picky pedantry, but “…we live, quite naturally, on a rather ordinary planet…” is jumping the gun; we don’t know the probability measure any more than we do for the multiverse. And I suspect the Moon makes the Earth distinctly unnatural, even if Earth-mass planets are commonly found in stellar habitable zones.

OK, astrobiology.

– The planet mass and its habitability is frequent (at ~ 6 % of stars). But systems are individuals, and our sun is unusually large (at ~ 5 % of stars).

– No, these types of large binaries are expected because they are, well, natural outcomes of collisional accretion. We have many examples already in our system: Earth-Moon, Pluto-Charon, and smaller binary asteroids. And they are a natural outcome of low-velocity, equal-mass accretion models, which predict Pluto-Charon and are now a strong contender for Earth-Moon. (Meaning the ancestral Tellus and Theia planetoids were likely similarly massed objects, an a priori most likely situation.)

– Large moons have little to say on the short-term habitability of planets. The ’90s results that seemed to say otherwise are now known to be faulty. Both abiogenesis and later evolution of complex life are possible outcomes.

Oh, re “don’t know the probability measure”, see the Habitable Exoplanet Catalog: we do, rather well, from 3 independent sources (transits, radial velocity, and microlensing).

According to your source: zero confirmed “habitable” Earth-mass planets out of ~1000 confirmed planets in The Extrasolar Planets Encyclopaedia. Too soon to say “rather ordinary planet” in my book. :) And, based on the evidence to date, our solar system is pretty atypical, too. (Eight planets!)

Bringing habitability into it was a mistake, because I was only making a pedantic point about the “naturalness” of the Earth. I’m not sure Jo[e] Public would consider a planet with an orbital period of 28 days or twice the radius of the Earth as “Earth-like”, even if it’s capable of supporting life! (As an aside, the Habitable Exoplanet Catalog doesn’t seem to account for tidal locking, and I would bet against life on those that are tidally locked.)

I think it would be even more incredible than predicting the Higgs for our model of planetary formation to survive the exoplanet revolution.

I’d value references on the third point about moons and habitability. Of course, now we “know” life had to evolve on Mars and then jump ship to Earth before Mars became uninhabitable… so now you need two planets! (No, I don’t believe that, but it would certainly answer the Fermi paradox!)

Now you have gone beyond habitability into inhabitation.

– Add Earth to HEC and you have that 10 % of habitables are inhabited. This estimate will slowly decrease until we find another inhabited planet, but it won’t decrease to zero.

– It is really fine-tuning if you expect just one planet out of a galaxy’s worth to be inhabited. About 10^-13 (since we have many planets and ice moons per star), which is nearly as serious as what we discuss here.

– The short period of time before life evolved on Earth shows that it is an easy process, so a frequent outcome.

To get back to habitability:

– As I said, the systems are individuals, planets not so much re traits like being placed in the habitable zone.

– “Tidal lock” is a tediously repeated claimed “show stopper”, though we already know it isn’t one. Climate models, as well as the example of nearly locked Venus, show that dense atmospheres will nicely distribute heat without too much surface wind. So the frequency of habitables isn’t as large as it could be, but still large. (Especially since most habitables will be locked around M stars.)

– Large moons not necessary ref: http://phys.org/news/2011-11-life-alien-planets-require-large.html

“It seems that the 1993 study did not take into account how fast the changes in tilt would occur; … According to Darren Williams of Pennsylvania State University, “Large moons are not required for a stable tilt and climate. In some circumstances, large moons can even be detrimental, depending on the arrangement of planets in a given system. Every system is going to be different.””

No, the argument isn’t that you need transpermia. (But we know it could happen from estimates of hypervelocity impact escape masses and bacterial survival rate. We also know that many systems will have 2-3 planets within the habitable zone as we have, see the HEC.)

The argument is that you can evolve life robustly as it takes much less than 0.5 billion years, then evolve land life and intelligence in a similar calm period. Again, lowered outcome, no show stopper.

Also, I think the discovery of Earth’s global oxygenation and global “slush ball” glaciations shows that orbital stability is overrated. Life survives much worse changes.

I’ll add on your last point that the Fermi question is insufficiently constrained to be tested by negative outcomes. (A positive outcome would do.)

There will always be the problem of silent pathways, false negatives. E.g. if there is no generic interest to communicate. It is anthropocentric to assume there is.

So it isn’t a fully testable question, a well defined hypothesis, with a definite answer. We can only constrain (or hopefully verify) frequency of positives.

I don’t get it. Why limit ourselves to variations of the Standard Model? Why not imagine all possible models? Our imagination is limitless, after all. Of course, then all the variations of the Standard Model will feel utterly unnatural compared to the enormous pile of non-standard models.

I mean, even if we assume the Standard Model is the correct description of the universe, then any other model, even those that are quite similar, is still as fictional as Genesis.

Remember, the article isn’t finished yet.

You can try to imagine all possible worlds that are described by quantum field theory. [I have no idea how other worlds would work, so there's no point in discussing them; the Standard Model is a set of equations for which quantum field theory works.] Then my statement would have to be generalized to the following: divide the set of quantum field theories up into

Class A) those that have spin-zero particles like the Higgs particle, but no supersymmetry (and one other technical caveat that I’ll explain later in the article), that are “lonely” (i.e. without other particles around that specifically have to do with why they are light)

Class B) those that don’t

you’ll find the vastly overwhelming majority of the theories are in Class B. But the Standard Model is in Class A. So the mystery, now stated more thoroughly but more confusingly (which is why I chose, for pedagogical reasons, not to jump that far in one step), remains just as serious as before.

It may sound trivial, but what if the universe is actually passing through cycles 3-2-1 etc. due to the change of the Higgs field value over time? If so, it would be perfectly “natural” to pass through a seemingly “unnatural” transitory phase such as the one we are in, sometime during this cycling process.

I don’t know much physics yet, but I feel like there’s something in this proposal that touches at the fundamental truth of the Universe hidden till now to the scientists.

It’s not impossible, logically speaking. Our scientific problem is this: how would we verify, experimentally, that this is the case?

Dear Prof,

Can’t we at least put forward some impossibility or implausibility arguments against my suggestion? If, as I suggest, the universe has been cycling between these three “phases”, could one say that, then, we should by now have observed at least some tell-tale signs of the “debris” of the previous phase, which, presumably, we have not? Or perhaps one could “prove” by logical reasoning, based on proven observations and analysis, that once a universal phase change occurs due to the change over time of the Higgs field, the new “state-space” of the universe would provide such a strong “attractor” for the new status quo that no further phase changes (new criticality events towards other attractors) would be theoretically possible, thus “extinguishing” the cycling process?

If, by the above reasoning, this suggestion is proved to be false, then that is the end of the story. However, if this is indeed possible, then why would one bother with naturalness vs. unnaturalness arguments, perhaps inevitably(?) leading to multi-universe theories, which philosophically put the burden of “explanation” on the 3 space (distance) dimensions, whereas this apparent “unnaturalness” may be explained by reference to just the 1-D time dimension? I do realize that I may be edging unduly towards self-organized criticality theories (which I don’t believe are a holy grail of explanatory power for everything anyway), but still, I believe this line of reasoning deserves some further scrutiny.

I guess I am repeating Donald’s question, but I did not understand your answer to it. In the SM, the range of values of the constants is probably known (am I right?). So what is it that makes it unfair dice rolling? I suppose, once you get into small numbers, the exact value does not matter!! But if you have a large number of systems (multi-universes or whatever), the net probability may become reasonable. I have seen Penrose’s estimate of the big bang starting with low entropy as 1 in 10^10^123. Other people consider the probability of the origin of life as 1 in 10^50 to 1 in 10^150. Leaving aside the question of life, is there any believable range of calculation for the SM parameters (even rough!)? Thanks.

I will talk a little bit about how we can put some precise numbers into the discussion; but in the end, there is no unique way to do this. And that could, perhaps, be the Achilles’ heel of the argument. [However, I remind you that the argument has always worked before in all previous examples...]

The likelihood of chemical evolution proceeding to biological evolution is of order 1*, seeing how fast life was established here, and consonant with the known homologies between ocean water and cells (frequency of the commonest elements, i.e. CHNOPS) and between pH-modulated alkaline hydrothermal vents and early chemoautotroph metabolism (Lane & Martin 2012). If abiogenesis were hard, neither of those would be seen; instead it was fast and used the common elements and redox sources at hand on habitable terrestrials.

* You can make that formal and testable with stochastic processes. Or at least as testable as stochastic processes are verifiable in industry from single samples…

So on the one hand the Standard Model is a profoundly unnatural theory, and on the other hand it’s the best our scientists have got? That reminds me a bit of the Ptolemaic world view, where the planets and the sun had to carry out weird movements in order for the earth to be in the center. Or the phlogiston theory, where “phlogiston” with negative mass was postulated in order to explain the weight loss when burning substances. Both “correct” and useful within their limits, transitional theories, but something profound was missing, and it was back then not obvious what it could be.

So, I would not be surprised if someday in the far future the Standard Model were replaced by something quite, but not completely, different. Like a reformulation from a different viewpoint. But I don’t believe I will witness that; it seems too far off.

On a different note, to state the limitations of the current scientific world view so clearly lends credibility, to me. It’s the best defense against all kinds of conspiracy theories, which are typically about the ignorance of “mainstream” science.

Best regards

It’s amazing how, on the interweb, you find people who’ve had *exactly* the same thought! I’ve looked at the SM for years and thought “epicycles” and “phlogiston,” that there has to be some key piece of understanding that’s just not right. The fundamental conflict between GR and the SM feeds into that feeling; one of them *has* to be, at the least, incomplete, right? (Emotionally, I want Einstein to have been exactly right and the SM in need of an overhaul.)

As I understand the naturalness conundrum, either there must be a lot of universes (meaning we’re not special; just another member of the pack) *or* we need a reasonable explanation for why the dice came up with a very rare number that allows us to be here to be amazed that we’re here. If we’re not special, no problem, but if we are special… then… wow!

I’ve been reading about the Rare Earth Hypothesis and the idea that multicellular life may have been incredibly unlikely, let alone intelligent life. Some feel the famous Drake Equation factors should be such that the probability of intelligent life per galaxy is significantly less than 1. We *may* be hugely special, and that does seem to beg for an explanation.

What bothers me is that The Blind Watchmaker logic seems sound. The eye was a poor example, as there is a clear evolutionary explanation. But more and more it seems that life itself, perhaps even physics itself, is a kind of weirdly complex, inexplicable watch. It’s hard not to think it was made or directed in some fashion (as preposterous as that sounds).

Lee Smolin’s idea about the evolution of universes is interesting, but if universes evolve to have the physics laws they do, what is the physics context in which this evolution takes place? Is there a meta-physics universe where universes evolve? (Isn’t that the Turtles problem?) I wonder the same thing about the BB. Under what laws of physics was the BB possible and take place?

[Long time reader, first time commenter, retired software designer with a lifelong interest in physics.]

More astrobiology.

I find the REH daft, since it is an open-ended Bayesian model. Just find the posterior you want, between 0 and 1, by adding factors.

And the REH’s original factors have all already been shown to be erroneous. (For example, the need for a large moon; see my previous comments on that.)

To me, time dilation doesn’t mean that particles become massless. There are two factors in time dilation: the strength of the gravitational interaction and the velocity within it. If we take a cesium-133 atom near a huge gravitational source, where it “ticks” much slower than here, then what’s the mechanism that makes its mass smaller?

In the case of near-c velocities, again, what mechanism would make it have smaller mass? Ticks slow down for sure, but that has nothing to do with subatomic masses (nuclei, nucleons, electrons).

Matt, why is the cosmological constant “unnatural”? After all, its value is almost exactly the inverse square of the horizon distance (“radius of the universe”). This is quite a natural value to expect.
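The numerical claim in this comment can be checked at the order-of-magnitude level (a sketch using approximate published values for the Hubble constant and the dark-energy fraction; these inputs are my assumptions, not from the article): the measured cosmological constant is indeed within a small factor of 1/R_H^2, where R_H = c/H0 is the Hubble radius. As the article stresses, though, the puzzle is why it sits anywhere near that tiny scale rather than the vastly larger value naturalness would suggest.

```python
# Approximate observed values (assumptions for this sketch):
C = 2.998e8                      # speed of light, m/s
H0 = 67.7 * 1e3 / 3.086e22       # Hubble constant: 67.7 km/s/Mpc converted to 1/s
OMEGA_LAMBDA = 0.69              # dark-energy density fraction

R_H = C / H0                     # Hubble radius in meters (~1.4e26 m)
lam = 3 * OMEGA_LAMBDA * H0**2 / C**2   # cosmological constant in 1/m^2 (~1e-52)

# The dimensionless comparison Lambda * R_H^2 reduces algebraically to
# 3 * Omega_Lambda, i.e. about 2, so Lambda is within a factor of a few of 1/R_H^2.
print(lam * R_H**2)
```

Note that this closeness is a statement about today’s universe; it does not by itself explain why the vacuum energy is some 120 orders of magnitude below naive quantum-field-theory expectations.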

I’m sorry, but you missed one state of the vase in your analysis: the vase in mid-fall. Maybe our Universe “is falling down” between two “natural” states, and maybe the notion of time has meaning only for this “period” (and this is why we believe that the falling state takes a long time).

I think a falling vase is rare when you compare it with the other two cases of natural states (on the table or on the floor), but it is a natural state too.

Maybe the naturalness problem of our Universe is a false problem, only because we just don’t understand yet what is really happening.

I think that the most fundamental criterion for any universe is “meaning”… so no universe can exist without a sapient creature to grasp it and give meaning to it; no mathematical or logical proof can prove that statement false.

Proving it true is a direct logical consequence of the existence of Homo sapiens himself.

From the other point of view: why is it not the case that our universe is the natural, normal, expected one, and all the other ones are the weird, nonexistent cases?

Why do we ignore the obvious: the universe is unnatural, since it was generated by conscious will, not physical forces.

How can science refute this ?

But what would you say in the case where a universe (forces, particles) emerges without any intelligent intervention?

Great… show me one single case or equation where physical output is generated without any, and I mean ANY, input of fields, particles, or forces…

If you want an example then do some googling or follow the link. Of course there is something behind the big bang, but after that everything emerges without any outside input.

Typical atheist response… I SAY: NO input whatsoever… no physical before the physical. This is the ONLY way to prove ex nihilo physical generation.

No single post or paper has ever started from real, fundamental, absolute NOTHING to generate physical phenomena; it is a logical absurdity to state it…

Prove me wrong: give me just one equation without any physical factors generating physical aspects… DO IT if you are so sure.

Did you read my previous response? Apparently not. Or at least missed my point.

The only logical, simple, scientific explanation is to admit the Divinity of creation, then study it as deeply as you can. I think we must admit that any purely scientific justification for actual, pure, real nothingness generating anything is just squaring the circle… a grave, false absurdity.

REMEMBER :

All, yes… ALL scientific papers start from something… no single paper could start from nothing, as it would end after the first line.

Prove me a liar .

Matt. :

How are you? It has been a long time since our last conversation…

Your vase example proves the Divinity case, as only an omnipotent effector can align the line through the center of gravity with the line of support and KEEP it that way… thanks, Matt. You proved that our cosmos needs an extra-cosmic effector just by comparing it to the vase case… Great.

Then do you agree, Kimmo, that generating our cosmos needs something behind it?

Yes I do agree. But it doesn’t have to be any “higher power” type of thing.

Then what? There is nothing you can find… it is an infinite regression with no real solution.

Matt,

I understand that many theorists are upset (not merely puzzled, but upset) at the elusiveness of patterns/principles like “naturalness” in evidence from experiments. What puzzles me is why it’s so upsetting, when we _know_ that the SM and GR are both incompatible and incomplete, with something like 95% of the contents of the universe not even hinted at by either theory. (In fact, we have no idea how a Lambda-CDM universe could EVER have come out the way it did — e.g., how an “inflaton” field’s potential got set, or why it evolved and stopped and converted its energy into particles the way it did. We don’t know ANY of that.) So I still don’t understand that reaction of being “upset” after reading your post so far.

(In fact that’s something else I don’t understand: You seem to imply that we can make predictions with both QFT and GR/gravity somehow combined that give us an idea of what we should expect as happening “naturally”, when the basic fundamentals of space and time and energy in the two theories are contradictory, both in their concepts and in the way they must be applied mathematically. I thought that the only way to even start to make the “quantization” of GR work is to restrict it to 2 dimensions; otherwise, the math falls apart — which is why all the effort has been expended over decades trying to figure out how to somehow correlate or map a 2D universe to a 3D + 1 universe in some homeomorphic way, as discussed in other posts. I wasn’t aware that such a combination of theories was required to get the reasons for expecting “naturalness” in a QFT.)

I can understand the hierarchy enigma. I can understand “Oh damn! Guess I was just wrong” and “Guess Nature follows some other pattern, not this one” and “So, Where do we go from here?” But other than the standard (and insulting) ploy of blaming people (like Arkani-Hamed, and perhaps even you) for just being miffed, after years of careful work, that their pet ideas haven’t turned out to be applicable to the real world, I’m still not sure why there are such extreme reactions:

What, in terms of actual contradictions or conundrums within the theory, are there if some kind of supersymmetry isn’t still an option? That is, other than “that’s just the way things are”, or “That’s strange…”, what is it that is being implied by the lack of (detectable) supersymmetric particles that is so “shocking”?

And, Why is the absence of such fields any more shocking than their presence if we were to find them?

An aside: I fully agree with (and hope everyone supports) the idea of pursuing an idea thoroughly, working out its ramifications, seeing where that leads and coming up with testable ways to corroborate or invalidate that idea (or parts of it) — even (perhaps especially) if that process is hard and/or subtle. Whether you think supersymmetry is a fundamental principle or not, of course we should explore every avenue to investigate it as a possibility. But as mentioned elsewhere, I am reminded of 19th century thermodynamics and the theory of caloric: prominent scientists (and many not-so-prominent ones) spent their lives coming up with the mathematics of heat as a fluid. Scientific papers on caloric were still being written and printed in the first decade of the 20th century, 50 years after the mechanical theory of heat as kinetic energy of molecules and atoms had been shown to be more accurate. Some of that work carried over; some of it didn’t. I also understand the difficulty in “threading the needle” (as you have put it) with new theories that might replace the SM and GR, while meeting all the constraints that we now know about. I just hope that as much emphasis is being applied (and funding allocated) to finding the next “particulate theory of matter and heat” as in exploring the older ideas that don’t seem to be working out, and that will need to be replaced/modified anyway.

I think you’re wrong to use the word “upset”. “Troubled” is a much better word. It disturbs our sleep — it seems that we’re missing something. But “upset” is the wrong word. Yes, there are some people who engage in polemics and call each other names. I bear no responsibility for such idiocy.

A lot of the upset that you are perhaps referring to seems to have to do with whether or not supersymmetry is right — and indeed you seem to assume this too in the way you wrote this comment. But supersymmetry is only one possible solution to the naturalness issue — perhaps it was once the most plausible, but after the LHC’s 2011-2012 results I’d say it’s now no longer the obvious front-runner. I also bear no responsibility for the people who love and who hate supersymmetry. Those people, perhaps, are “upset”. [Though not as upset as the people who love technicolor, which predicts no observable Higgs particle.] I think such emotional debates are silly and demeaning to the scientific process. I’m interested in nature, not my pet theory or someone else’s pet theory. I neither love nor hate individual theories; I just want to know if any of the ones we know actually matches the real world.

Now, back to the issue at hand. I haven’t finished the article, so you’re running way ahead of the game. It takes time and careful thought to appreciate this problem; the typical graduate student (for example, me, back in the day) has to think about it and about the various possible solutions to it for a few months before really appreciating just how hard it is to solve within a quantum theory. There are only a handful of solutions (all of which I’ll explain) and all of them — except a “selection bias” (also sometimes called a “weak anthropic principle”, though the use of “anthro-” is misleading) — predict new particles accessible at the LHC sometime in its current or future runs.

So trust me for a moment when I tell you that it’s really hard to solve, and that any solution within theories of the class we know would predict new particles at the LHC. If we don’t find them, then either

1) there’s a selection bias (which is a statement about the whole universe, and therefore quantum gravity)

2) or the naturalness criterion derived from decades of study of quantum field theory — which has not failed before, mind you — does not apply, at least not in its strongest form, to the Standard Model

3) or there is a solution to the naturalness problem which violates the known principles of quantum field theory (i.e., we must go beyond quantum field theory not just for quantum gravity but even for the physics of the Higgs)

Concluding that at least one of these is the case — which will take a long time — would be a very big deal, conceptually. It would change the questions that scientists ask, and could mark a major shift in the history of the field — much like the Michelson-Morley experiment was a watershed event (not so widely recognized, though, at the time) whose impact on our understanding of nature was immeasurable.

So yes, there’s a great deal at stake. I don’t see the point in getting upset about it, but troubled? Sure. Maybe the solution is right under our noses and we’re looking right past it — and some young Einstein-II will deliver the solution one day. Or maybe we’re doing the wrong sets of experiments, and the solution will be revealed accidentally in an experiment that wasn’t intended to address the question at all.

So a small average Higgs field value implies a small Higgs boson mass, and a huge average Higgs field value implies a huge Higgs boson mass, which of course is why the LHC thought it might find the Higgs. I’m sure you must have explained the theoretical reason behind this in one of your previous posts.
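
In the Standard Model, the relation this comment describes is, at the simplest (tree) level, m_H² = 2λv², where λ is the Higgs self-coupling: for fixed λ, the boson’s mass scales directly with the field’s average value. A small sketch with the measured numbers (approximate values, tree-level relation only):

```python
import math

# Tree-level Standard Model relation: m_H^2 = 2 * lam * v^2,
# where lam is the Higgs self-coupling.  Numbers are approximate.

v = 246.0    # Higgs field's average value, in GeV
m_H = 125.0  # observed Higgs boson mass, in GeV

lam = m_H**2 / (2.0 * v**2)  # self-coupling implied by the measurements
print(f"lambda is about {lam:.3f}")  # roughly 0.13

# If v were ten times larger (with lam unchanged), m_H would scale with it:
m_H_scaled = math.sqrt(2.0 * lam) * (10 * v)
print(f"m_H at v' = 10v: about {m_H_scaled:.0f} GeV")  # roughly 1250 GeV
```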

Pingback: Adding to the Naturalness Article | Of Particular Significance

Matt.:

But it is clear that deciding what is expected and what is unnatural is 100 percent subjective. For example, for me the most “natural” universe is one like ours, since by my criteria a universe without a sapient being is not selected; as you see, even the selection force is subjective.

Conclusion: there are no solutions to the assumed naturalness problem, as in fact there is no problem at all in my subjective view.

PS.

By “subjective” I mean your decision that such-and-such properties are not expected, biased by your a priori decision that ANY kind of universe is possible, which no theory can prove.

Now hear this: what about the quantum field fluctuations that are extremely tuned to result in the UP? You would say it is extremely unnatural. But what if there exists a selection force that renders that relation very, very natural? Our physics cannot tell…

Physics beyond physics may tell.

For physics to solve this problem, it must prove the impossibility of a meta-selector mechanism that selects only for a life/mind-friendly cosmos, rendering all other ones impossible (extremely unnatural, with zero sigma).

Matt, about Fig. 4: how do you measure the Higgs field’s average value? For an electric field, you can measure the force on a charge placed in the field and calculate the field’s value at that point. Is the Higgs field’s value also calculated by measuring a force on a charge or a particle, etc.?

The second part of this article is very interesting and helped me understand some important details. I’m really looking forward to the next part.

Matt, I HOPE I am not a proverbial pain in the neck for you. That would be the last thing I want. I’m trying to comprehend what the vacuum is, why it is black, and why it appears to be nothingness when obviously it is something very unique in our physical world. For example: it withstands such physical abuse from supernova explosions, constant radiation of the worst kind, torquing by gravity, and the mad speeds and spins of celestial objects. Could you please write something about the vacuum? It is the BIGGEST body known to us, at least here in our universe. I know that quantum fluctuations are happening on a quantum scale, not in an everyday cup of space that we can see and observe. The virtual particles are said to be popping in and out of existence so much that this phenomenon is viewed as a constant of the vacuum (the cosmological constant). I deduce, from what little I know, that vacuum is a medium, just like air or water here on earth, that can absorb things, store radiation, and make it a part of itself. When physicists observe the virtual particles, they must do that test in some depressurised cylinder. Could that be the reason for the appearance of the virtual particles? A depressurized vacuum is actually experiencing the pressure created by the walls of the cylinder. The energy build-up could be resulting in the creation of fake particles, because vacuum in space has to move/flow/breathe. Is there any evidence that vacuum in space also produces these virtual particles? Has this been proven? Hope you can answer this one, professor.

Many questions, and it has been some days. So short answers:

– The vacuum here, for particle physicists, is the sum of all particle fields. Other disciplines have other definitions (e.g. low-pressure vacuums, astronomical vacuums, the cosmological vacuum).

– It is not black, but can be seen locally as a relativistic black-body emitter in some cases. (E.g. Unruh radiation for accelerated objects; Hawking radiation around black holes; the static and dynamic Casimir effects.)

– The vacuum is not a body, it is a system of fields. Magnetic _field lines_ snap and reform if stressed enough, see Wikipedia – such field lines track existing particle behavior. But the particle fields are pervasive, they don’t disappear.

– Virtual particles (a misnomer) are observed through the behavior of the electric field close to charges. See Matt’s articles. You don’t need a vacuum for that – I suspect you are thinking of particle accelerators, whose innards are pumped down to low pressures for technical reasons.

Your physics speculations in the rest of the comment don’t connect with anything we observe and know about the actual physics. I recommend Wikipedia’s articles on low-pressure systems and hope they help with the basics.

Well, that doesn’t answer my question: is there any evidence that vacuum in space also produces these virtual particles? Otherwise we humans/you physicists are trying to fit nature into your theory. Most of it fits, I hear, with the exception of the cosmological constant. If it doesn’t fit, discard it. Problem solved. To say that quantum fluctuations permeate all space, from far, far away galaxies to the spaces between an atom and its electron, and even further down into the spaces, or rather fields, of quarks and gluons (any space you can imagine) is fine with me. I won’t lose any sleep over it, but what I see wrong is you people trying to calculate the spaces of the universe you have no means of verifying. What is the principle here, that the energies of the universe have to balance out? All the positive energy of the fermions (which are just another state of energy) has to cancel out all the negative energy of the universe, which is the energy of the forces (their fields and bosons). WHY? If these two cancel each other out, don’t you get a static universe? This universe is on the go. What’s driving it, forces or fermions? If fermions are energy in another state, then everything there is is energy. There is no need to balance anything out. The problem is not in the unnaturalness of nature but in the unnaturalness of an approach. You are trying to solve a problem that doesn’t exist in nature but is created by your way of looking at the things that make up the nature of this universe. First solve the problem of gravity. Is gravity truly the fourth force, or is it an effect caused by other factors? In that case the universe will have only three forces operating upon it. Yes, that would make your problem even worse, because without the gravitons you’d have less negative energy. The ratio could be as low as one to two. Why? Due to all the cancellations that different forces have on each other.
I think that the nature of nature is not to be precise, but it’s precise enough to hold its shape and do what it’s supposed to do. In your theory, though, nature is precise. It’s rigid. That’s what’s unnatural. If I weigh two ounces or pounds less in a high-rise building than when I reach the ground floor, I’m none the wiser. Nature has imposed some strange effect on me without me even knowing it. I’m still me, holding my general shape. The same principle applies to the universe, I should think. So, the problem of naturalness is man-made.

In its simplest form, there is only one phenomenon which causes all the forces: the spinning of objects. If you think about it, what other phenomenon could it be? The funniest part comes when you derive a model based on spinning: everything becomes crystal clear. Try it for fun :)

“The vacuum here, for particle physicists, is the sum of all particle fields.” Yes, I realize that, but by Wikipedia’s definition, vacuum is a space free of any matter. It mentions perfect vacuum and partial vacuum. Regardless of that, the vacuum is supposed to have fluctuations of energy that can be measured and observed. I don’t question that. If you are a physicist, tell me where I’m wrong. If we say that the universe can’t have more positive energy than negative energy, that is assuming that we had a Big Bang. We have to start to visualize another picture of how the universe started: the possibility that matter popped out of the vacuum itself, and that the vacuum existed before the matter. The current quantum fluctuations could be the end state of a vacuum that could have been a lot denser before. The SM is “as if” designed to eliminate the concept of our universe being created out of an empty-space vacuum that might have existed forever. You say the BB created the space vacuum; give me some proof of that. The CMBR means nothing of the sort. You are looking at the magnetosphere of either this galaxy, a cluster of galaxies, or the universe itself, but how can you know for sure which one? The weak radio energies prove nothing unusual, least of all that the BB occurred. Tell me first where all the photons go that don’t get absorbed by matter. Tell me if the universe has a magnetosphere, or whether there is no difference between the space of our universe and hyperspace, or whatever you wish to call it, if the multiverse is reality. If the BB never happened, then it changes the picture of what reality is. Unless any one of you physicists is willing to forget for the moment all you have learned and accepted to be the truth of reality, and take a fresh new approach to this universe and its nature, you may never solve this problem of seeming (emphasis on seeming) unnaturalness of our universe. Yes, and please don’t minimize the importance of the anthropic principle. That’s where your answers are.
If you get lost in the details of the picture, you may miss the whole meaning of it. (Too close to the trees to see the forest?)

I don’t get it. We don’t know the probabilities for any of this, so where is the problem? It’s like finding a melted 20-sided die with the number 20 showing, and exclaiming “huh, there’s only a five percent chance that this would exist!”.

I think Matt’s comments above show that you _can_ estimate the problem:

“We’re in the situation of having at best an extremely rough guess at the probabilities, so we have, at best, an extremely rough estimate. But when you’re dealing with numbers that are this small, getting them wrong by a huge amount doesn’t change the qualitative conclusion: our universe, no matter how you calculate it, is very unusual, on the face of it.”

Oh, and when you make an appeal to frequencies of events (die throws), you would here make an appeal to multiverses.

That’s just the thing, though. You assume that the die was thrown before it melted, or even that it’s a fair die and not one with “20” written on all sides.

And what Matt is saying, if I understand him correctly, is that we don’t know whether there were ten thousand sides or ten nonillion sides on the die, but either way the chance of rolling a 20 is pretty low. Again, this is based on the potentially false assumption that it’s a fair die.

Oh, so you put some significance in the “melted” part. You are speaking of “fair”.

Well, Matt was considering a fair die, so he could calculate rough probabilities within our universe. It seems to me it is you who has to demonstrate first that a multiverse exists (die throws), then that unfairness exists, then that it applies to Matt’s problem, and then that it is better than Matt’s estimate. Lots of “if”s you take upon yourself.
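
The die metaphor in this exchange can be made concrete with a toy simulation. This is only a sketch of the two positions: a fair d20 makes a 20 rare, while a selection effect (only ever looking at dice that show 20) makes it certain, and the observed die face alone cannot distinguish the two:

```python
import random

random.seed(0)  # fixed seed so the toy numbers are reproducible

N = 100_000
rolls = [random.randint(1, 20) for _ in range(N)]  # a fair 20-sided die

p_twenty = rolls.count(20) / N
print(f"fair die: P(20) is about {p_twenty:.3f}")  # close to 0.05

# Selection effect: "observers" only exist where the die shows 20.
selected = [r for r in rolls if r == 20]
p_selected = selected.count(20) / len(selected)
print(f"after selection: P(20) = {p_selected:.1f}")  # exactly 1.0
```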

Pingback: (Un)Naturalness, Explained | Of Particular Significance

Excellent explanation as usual, Professor Strassler. You really do have a gift for coming up with apt analogies.

In the spirit of helping you polish this article, I have noted some typos in the most recent addition.

One is in the text adjacent to Figure 5, “It’s as far as one could can push the Higgs field’s value up”

One is in the text adjacent to Figure 8, where you describe someone as being “now a professor at the University of Illinois Professor”

I also have an observation – throughout the most recent section, you claim that if vmax is near 500 GeV, there is no problem, and that we do not know what vmax is. So isn’t the obvious solution that vmax is near 500 GeV?

This most recent section of your article does a good job of convincing me that the naturalness problem increases as vmax increases above 500 GeV, but you have provided no evidence that vmax is indeed greater than 500 GeV. Is there some reason we have to think that vmax is so big that there really is a problem?

The obvious solution is indeed that vmax is near 500 GeV. And if that is true, the LHC will discover as-yet unknown particles, and other predictions of the Standard Model will fail as well. The strongest evidence against it — inconclusive at this time — is that the LHC has not yet discovered any such non-Standard-Model particles, and there are no known deviations from the Standard Model at the current time. Arguably we should have seen subtle deviations already by now. But I will get to this issue soon.

Thanks for catching the typos!
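
The scaling behind “the naturalness problem increases as vmax increases” can be sketched numerically. The standard statement is that quantum contributions to the (Higgs field value)² are naturally of order vmax², so holding the observed v near 246 GeV requires a cancellation of roughly one part in (vmax/v)². Only the scaling is the point here; the prefactors are model-dependent:

```python
# How the required cancellation grows with v_max.  Only the (v / v_max)^2
# scaling is the point; exact prefactors depend on the model.

v = 246.0  # observed Higgs field value, in GeV

for v_max in (5e2, 1e4, 1e10, 1e19):  # from "no problem" up to Planck-ish
    tuning = (v / v_max) ** 2  # fraction to which contributions must cancel
    print(f"v_max = {v_max:9.1e} GeV: cancellation to ~1 part in {1/tuning:.1e}")
```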

This is very similar to my comment on the (explained) post; I guess there is some confusion here.

Tony (Rácz) Rotz

There is no such thing as a vacuum, so the real question is: do we know what all the fields are and how they interact? Therefore each yet-to-be-known particle is an indication of a field that somehow interacts with other known and unknown fields? Would a black hole singularity be, in essence, matter broken down to its constituent fields and tied up, or not, or compressed into a knot? As to a Creator or not, I have had my own personal experience; however, it is best not to argue over something that cannot be accepted by many for their own personal reasons. This should not be a blog about religion.

There is such a thing as vacuum — the universe is much emptier than you think. Remember that only 1/10^15 of the volume of an atom is occupied by the nucleus. And we are calculating the effects of fluctuations whose distance scale corresponds to a volume at least 10^9 times smaller than that of the nucleus. So the universe has lots of vacuum, on these distance scales.
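
The volume ratios in this reply are easy to sanity-check with rough textbook radii (illustrative numbers only; both radii vary by element):

```python
import math

r_atom = 1e-10     # typical atomic radius, in m (rough)
r_nucleus = 1e-15  # typical nuclear radius, in m (rough)
r_fluct = 1e-18    # rough shortest distance scale probed so far, in m

atom_over_nucleus = (r_atom / r_nucleus) ** 3    # about 1e15, as stated
nucleus_over_fluct = (r_nucleus / r_fluct) ** 3  # about 1e9, as stated

print(f"atom/nucleus volume ratio: about 1e{round(math.log10(atom_over_nucleus))}")
print(f"nucleus/fluctuation volume ratio: about 1e{round(math.log10(nucleus_over_fluct))}")
```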

The old grandmother of 15th century comes back to see her grand … grandson, the Apple boy. The following is the conversation about this unnaturalness issue.

Apple boy: G-grandmother, I just read a great article about the unnaturalness of Nature. Every argument in it makes sense. But put all together, it just doesn’t make sense to me. How can Nature be unnatural? You have sat right beside Nature for the past 500 years. Can you help me on this?

G-grandmother: Oh, the only thing unnatural to me during my last visit in the 17th century was seeing a man who walked on a rope 100 feet above the ground.

Apple boy: It is unnatural to an average person in their ability to perform such a stunt. But, it is simply allowed by the laws of Nature.

G-grandmother: Well, I really see some unnatural things in this visit. When I came down from upstairs, I saw many people in a bird-like metal box flying in the sky. Then, I saw many people talking from a hand-held plate.

Apple boy: Indeed, they are unnatural in a sense, as they were invented by humans. We call them artificial inventions. Yet they are still parts of Nature, as only the laws of nature allow their existence.

G-grandmother: Last night, I saw a video about Ptolemy model of the universe, and all stars are dancing in different patterns. How can a star dance like a dancer? It is truly unnatural.

Apple boy: That model put the *center* of the universe at Earth, and it is an unnatural way of doing it. Indeed, that model is unnatural which means *wrong* today.

G-grandmother: Hi, boy, you seemingly know all answers. Then, what is your question?

Apple boy: Matt showed a vase/table analogy and said that our universe sits at the unnatural situation (the rightmost picture).

G-grandmother: Why is it unnatural? For a few hundred years, I have travelled with my nephew Jedi all over this universe. And most of the time (99.9999…%), the vase is not sitting in the natural cases described in his analogy. His saying is true only when my spaceship has landed on Earth. There must be some unnatural force around Earth.

Apple boy: Okay, okay. No analogy. But, please read the entire article. The argument is very strong individually, especially about the Standard Model.

G-grandmother: Hi, boy, it took me awhile to read it. No big problem, but a major confusion. There are only two questions.

a. Standard Model is unnatural (meaning, it is wrong).

b. Nature is unnatural (meaning, … nuts).

Apple boy: The issue is more subtle than that. The Standard Model has three parts.

Part A — A zoo of particles (especially the 48 matter particles) which are verified by tests.

Part B — A set of equations which *fits* the test data via many parameters put in by hand.

Part C — A reverse-engineering which produced Higgs mechanism.

If Standard Model is wrong (unnatural), which part is wrong?

G-grandmother: Part A is message directly from Nature. Part B is artificial but works for Nature. Only part C is the suspect of the problem.

Apple boy: But, Higgs boson was *discovered* on July 4th 2012.

G-grandmother: Indeed, LHC found *something* on July 4th 2012. But 14 months later, we haven’t even officially established evidence for the Higgs-to-bottom-quark-pair decay (which is one of the *golden channels*).

Matt’s argument is all about the *mass* of that something (he calls it Higgs boson) vs the *vacuum energy* (he calls it Higgs field) of the empty space.

In one other model, the mass of that-thing should be [(1/2) of the vacuum energy + some transformation barrier], as that-thing is a blob of [a vacuum state to a new vacuum state] transformation.

This is an unsettled issue and thus no need to go into any further.

Apple boy: Okay, let’s put the Higgs issue aside. The unnaturalness can still arise in the case of *multiverse*.

G-grandmother: *Possible universes* was a very old philosophical topic. The evolution *pathways* for the universe are zillions (infinite, to be exact), but the *history* of the universe is unique: only one history. That is, there is only *one set* of laws of the universe.

The *multiverse* in the article is about having many different sets of laws of nature. They got this idea from the concept of *fine-tuning*. If a set of laws can be tuned, it becomes many different sets.

Apple boy: *Fine-tuning* is definitely a part of nature. If we change the nature constants very slightly, this universe will be dramatically different.

G-grandmother: Well, this is another major confusion. Nature is very, very precise, locked by Alpha (a dimensionless pure *number*). That is, no *dimension* of any kind can change it. Preciseness looks very much like fine-tuning but cannot be tuned.

Apple boy: Thanks G-grandmother. Now I understand the issue which has only two questions.

1. Which one is unnatural —- Nature or the Standard Model

2. Can preciseness be tuned?

Apple boy: I think that you have swept three very important points off the table by questioning the Higgs mechanism, haven’t you, G-grandmother?

1. Three Higgs (or other) field classes (on, off, on/small)

2. Quantum fluctuations of quantum fields, and the energy carried in those fluctuations (the vacuum energy)

3. Why It Isn’t Easy to Have the Higgs (or Higgs-like) Particle’s Mass Be Small (the summation of all different fields and the magic cancellation)

G-grandmother: I was a farm lady, you know. I know everything about *fields*, the corn field, the potato field, the sheep field, the dog field, the fish field, … the ocean field, etc..

Apple boy: Come on, G-grandmother. A herd of sheep, a pack of dogs and a school of fish, not fields.

G-grandmother: Okay, my bad. Just exclude those then. But, for all other fields (corn or the whatnot), I could turn them on or off as I please, by plowing them out or seeding them in. If you can move this Earth into Mercury’s orbit, I can even turn the ocean field off.

Apple boy: What is your point?

G-grandmother: Just a bit of Buddhism here. All those fields are transient phenomena. Their being on or off has no importance for the eternal reality. For me, there was only *one* field, the surface of the Earth, and it cannot be turned on or off (so to speak). And this true field is a tad bigger than all those *fields* added together. So those summation operations over all those different fields (top quark field, etc.) do not make any difference to the true Daddy field, which cannot be turned on or off. By the way, if a field can be turned on or off, it cannot be the true Daddy field.

Apple boy: What is the true Daddy field for this universe?

G-grandmother: Now, you ask a right question. It is the space-time sheet (field). All matter particles are protrusions from the space-time field. When an electron protrudes, it forms an electric field.

Apple Boy: So, Higgs field is not space-time field. Is there anything wrong with the Higgs field argument in this unnaturalness issue?

G-grandmother: This is the whole problem. The argument implies that the Higgs field is the true Daddy field which affects the entire universe. You know, only the true Daddy field (the space-time-sheet) carries the *vacuum energy*. Any other fields also carry energy, but not vacuum energy.

Apple Boy: Come on, everyone knows that the vacuum energy is the result of quantum fluctuations of quantum fields.

G-grandmother: No, the quantum fluctuations of the electric field are not vacuum energy. This is a linguistic issue, you know. Vacuum refers to a lack of matter in *space*. So vacuum energy is about the energy carried by the space-field (the space-time-sheet, to be exact). If the Higgs field carries some energy, it should not be called *vacuum energy*, unless the Higgs field is the space-time field.

Apple Boy: Well, besides not being able to be turned on or off, what is the other reason that the Higgs field cannot be the space-time-sheet?

G-grandmother: The space-time-sheet houses *all* fields (including the gravity field), the same as the Earth field houses all plant fields. If the Higgs field does not house all fields, its being on or off does not truly make any difference to the space-time-sheet. If it does house all fields, then it cannot be turned on or off. All those calculations are just games on paper.

Apple Boy: Okay, let’s put this Higgs field vs vacuum energy issue aside. The point that the vacuum energy is the result of quantum fluctuations is still important, isn’t it?

G-grandmother: Wow, you got a key question again. We know three facts.

a. Quantum principle (fluctuations) is a fact.

b. Vacuum energy is not zero.

c. The above two facts (a and b) are related.

But, what kind of relation are they, the cause/effect or the fundamental/emergent? There are two possibilities.

1. Quantum principle (fundamental) causes the nonzero vacuum energy (emergent).

2. Nonzero vacuum energy (fundamental) causes the quantum principle (emergent).

Apple Boy: Come on, everyone knows that #1 is the answer. But, what is the big deal here?

G-grandmother: This is *the* issue. By selecting #1 as the answer, we are facing the unnaturalness issue. By selecting #2 as the answer, the Nature cannot be unnatural. But, this issue is very deep and cannot be discussed any further here.

This is fantastic – the way you explain things makes the investment of a little work on the part of the reader very rewarding, as I am now feeling quite close to really grokking the whole thing, after years of reading explanations for non-specialists that skirted around the tricky parts of the logical argument, leaving the reader unsatisfied even after effort. But–and I think it’s a big but–I am finding one key thing elusive and I would enormously appreciate an amplification of this one issue. Why would the non-existence of particle physics beyond the Standard Model (apart from gravitation) imply that v_max is as high as possible? Tell me that, and I’ll feel able to explain the whole thing to anyone who cares to listen.

I’m not sure I’ve understood your confusion yet. If I do understand it: The answer is that it was almost by definition of v_max. v_max is the scale at which the Standard Model is no longer a good description of all particle physics. If gravity plus the Standard Model described all of nature, then we know what v_max is… it is that huge number that I wrote in the text. Physically, this is the scale of energy above which precise calculations of scattering experiments require inclusion of gravity effects… where gravitational forces are just as strong as the other forces. Have I answered you?

Thanks for your answer. I think in composing my question I misidentified my misapprehension, because me_[today] can’t understand me_[yesterday], and now my confusion about Figure 5 is different (UPDATE: though 24 hours after beginning this response I think I’ve got it; perhaps you can verify). Here’s my understanding of it: the Higgs field has an average value throughout the universe, which, before we measured it, had a range of possible values. The Standard Model lets us work out, for different potential values that the average value might take (up to a maximum, v_max, beyond which the SM ceases to be consistent with observed reality), the contribution to the energy density of empty space for each of a number of known effects of the Standard Model. These are the blue curves in Figure 5. There might or might not be any additional, as yet unknown, contributions–these are the red curves, and you’ve drawn some fictional ones in the second row. Whether or not there are any red contributions, the sum of blue and red contributions must correspond to what we observe. And what we observe is that the Higgs field has an average value of 246 GeV, and the energy density of empty space is such-and-such.

Now, the blue effects are in principle–and, to a large extent in practice–calculable, you say. Given that we know now from experiment what the average value of the Higgs field is–246 GeV–we should expect to be able to predict (if the SM is all there is) the energy density of empty space by performing the calculations for the blue contributions AT ONE PARTICULAR SLICE OF THEIR CURVE, namely the slice corresponding to 246 GeV on the horizontal axis. And either that calculation gives us the observed value for the energy density of empty space, or it doesn’t–if it doesn’t, then there must be red effects that provide additional contributions. The amazing thing is, whether or not we need to invoke as-yet unknown red effects to yield the observed value, that the observed energy density of empty space is extremely tiny, and there’s no reason that all these unrelated effects–whether or not there end up being any red effects–should end up coming so close to cancelling out, given the comparably large individual contributions both positive and negative. If they hadn’t cancelled out so closely, no universe similar to that we observe could have existed.
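To make the "coincidence" described above concrete, here is a purely numerical toy (every value below is invented for illustration; nothing here is a real Standard Model calculation): several individually huge contributions must nearly cancel to leave something tiny.

```python
# Toy illustration of the cancellation puzzle (all numbers invented).
# Several contributions to the energy density of empty space are each huge,
# yet their sum must come out absurdly small compared to any one of them.
known_blue = [
    +1.234567890e8,   # fictional "blue" (known) contribution
    -2.345678901e8,   # another fictional "blue" contribution
    +3.456789012e8,   # another fictional "blue" contribution
]
unknown_red = -2.345678000e8   # fictional "red" (unknown) contribution, tuned by hand

total = sum(known_blue) + unknown_red
largest = max(abs(c) for c in known_blue)

print(total)            # a tiny residual compared to each term
print(total / largest)  # cancellation to roughly one part in a billion
```

Notice that the "red" term had to be dialed in to nine digits to achieve this; that hand-tuning is the toy version of what "unnatural" means here.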

Ok, here’s what I didn’t get, and now perhaps I do get: once we have observed the average value of 246 GeV, why do we need to consider what happens all the way up to v_max on your charts? Whatever is to the right of 246 GeV (and to the left of it) doesn’t correspond to our universe. So if there are no red effects, then the consequence is just that the blue effects have to combine to give the observed value, and the value of v_max doesn’t come into it at all. The reason we might care about v_max comes in if we want to answer the question of how unlikely it was, with no other assumptions, that the cancellation should have occurred: a very high v_max means a very wide range of potential Higgs energies, and at the largest of those energies it would be even more exceedingly improbable that cancellation should occur. So the higher v_max is, the greater the coincidence–if you model our universe as having had a randomly assigned Higgs energy (i.e. considering a multiverse model which leads to anthropic selection bias etc.)

The absence of any new particle physics would mean that the limiting value for the Higgs energy occurs when the top quark gets too heavy for the rest of the Standard Model to work. This is really really really big, and so the absence of any new particle physics at higher energies means the coincidence of cancellation, with such a wide range of potential universes to choose from, is that much more startling. And that’s why you’d expect there to be new particle physics revealed at higher energies than we can currently probe: to mitigate the extraordinariness of this coincidence by giving us some new reason to not have to consider those potential universes with really really really energetic Higgs fields.
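A common back-of-envelope way to quantify "that much more startling" (a standard rough measure, not taken from the article itself): the size of the required cancellation grows with the square of v_max.

```latex
% Rough fine-tuning measure (standard back-of-envelope estimate):
% the contributions to be cancelled grow like v_max^2, so a minimum near
% 246 GeV requires a cancellation of roughly one part in
\Delta \;\sim\; \frac{v_{\max}^{2}}{(246\ \mathrm{GeV})^{2}} ,
% which for v_max near 10^{18} GeV is about one part in 10^{31}.
```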

If I’m right in all of that–it was writing the final sentence above that felt like a breakthrough in understanding–then there’s still something slippery to the argument in my mind: it seems that the probability of cancellation has been based on a distribution of possible universes that all have the Standard Model or the “Standard Model Plus”, but that are allowed to vary by their different values for the energy of the Higgs field. Why isn’t the choice of this range, and only this range, seen as a little arbitrary? Thanks for taking your readers’ questions so seriously.

“once we have observed the average value of 246 GeV, why do we need to consider what happens all the way up to v_max on your charts?”

You have the logic upside down. If the theory is correct up to v_max, then we’re ALLOWED to consider what happens all the way up to v_max. And when we do, we discover there has to be very delicate cancellation between what we know and what we don’t know if there’s to be a Higgs field sitting at 246 GeV. Don’t confuse what we know from experiment (the field’s value is 246 GeV) with our efforts to get the right theory of nature (whose equations should predict 246 GeV, or AT LEAST should predict that 246 GeV is not extremely atypical.)

“to mitigate the extraordinariness of this coincidence by giving us some new reason to not have to consider those potential universes with really really really energetic Higgs fields.”

Not quite. Mitigating the extraordinariness of the coincidence might not change our ability or reason to consider how the universe would behave with a huge Higgs field value [NOT "energetic" Higgs fields -- a very different thing]. What it would certainly do is make it more obvious why 246 GeV would emerge from our equations, instead of zero or something much larger. Let me explain some of the solutions; then it will become clearer what I mean.

“it seems that the probability of cancellation has been based on a distribution of possible universes that all have the Standard Model or the “Standard Model Plus”, but that are allowed to vary by their different values for the energy of the Higgs field.”

Again: “vary by their different values of the Higgs field”, NOT “different values for the energy of the Higgs field”. Fields have both value (how big is the field at this point) and energy (how much energy is associated with the field being this big.) We’re talking about varying the value up to v_max.

” Why isn’t the choice of this range, and only this range, seen as a little arbitrary? ”

If the theory is valid up to v_max, then that more or less sets the range within which we can do the calculation. (I can see that the fact that I’ve been a little vague about how we actually do these calculations is causing some confusion; the article needs improvement.) It’s not arbitrary; it’s pragmatic. Nevertheless, there *is* some arbitrariness here. But the naturalness problem is so gigantic that other choices don’t generally make it go away. Only if you make an enormously different choice can you make the problem evaporate.

But successfully justifying that choice would represent, in fact, an example of creating a new solution to the naturalness problem. There are only a few such solutions that have been proposed over the past 35 years.

Okay. Thanks. One last thing–I wrote: “Why isn’t the choice of this range, and only this range, seen as a little arbitrary?” And you replied “If the theory is valid up to v_max, then that more or less sets the range within which we can do the calculation.”

But by range, I meant the range of hypothetical universes that we are imagining ours being chosen from, not only the range of values of the Higgs field. If I’ve understood correctly, to quantify the degree of improbability of our universe being just so, you look at how wide a portion of the horizontal axis on Figure 5 results in a universe compatible with existence, and you see that only one in a gazillion potential universes will work. The range of potential universes you consider is precisely those that have the Standard Model or the Standard Model Plus, and that have a Higgs field value ranging from v_min to v_max. But this seems (on the face of it, to one particular layperson at least, i.e. me) a little arbitrary, in that your realm of possibilities is unconstrained as to how the Higgs field value varies within its range, but very constrained in another respect, namely that the possible universes are all SM universes.

I can’t even begin to imagine how you could enlarge the calculations so as to encompass every other kind of (non-SM) universe! But wouldn’t extending your realm of possibilities of potential universes necessarily change the a priori likelihood that a randomly chosen one is compatible with existence?

Clearly the precise numerical likelihood that you compute depends on what possibilities you consider, and we have no idea what options nature had available, if any. But notice that by focusing on the Standard Model itself — not Standard-Model-Plus — I aim to identify a conceptual issue *internal to the Standard Model* which appears to require resolution by something *outside the Standard Model*. This is a way of arguing that the Standard Model and its Higgs field are not likely to be the whole story for particle collisions with energy well above 500 GeV.

Excuse me for butting in, but the SM accounts for only 4% of this universe. Wouldn’t this unnaturalness problem disappear when new particles are discovered (2015, CERN) that might describe the dark matter and the dark energy? The universe appears to be unnatural, or opposite to what was expected and suggested by A. Einstein. It’s not beautiful, calm, predictable, but wild, unpredictable, turbulent and much, much bigger. Could this be because the universe is an open system and not closed, as almost everyone thinks of it? If the vacuum wasn’t created by a BB but existed just about forever, or at least before this universe, and was much denser than today, not switched on but in an inert state, why couldn’t particles come out of this field? We’d need something to switch it on. Say it was a deliberate act, planned, etc. If we could build a theory on this hypothesis, and it checked out, wouldn’t we solve this cosmological problem?

In the blue, red and green curves, I suppose you are assuming that there is only one Higgs field and one Higgs particle at the currently known values. Does having more Higgses at higher and higher energies help with the fine-tuning problem or make it worse? In other words, are more Higgses buried in the high value of v_max?

Actually, if there were, say, two Higgs fields, the only thing that would change is that instead of my curves (functions of one variable), I’d have to draw functions of two variables. The argument would be entirely unchanged; adding more Higgs fields does not give any natural cancellations, and the problem would now be just as bad.

Is the unnaturalness problem totally dependent on our assumption that the primary cosmic fundamentals are fields, so that if in the year 2100 that assumption were proved wrong, the unnaturalness problem would vanish? Or is it itself some kind of problem of fundamental rank? In other words, are ANY fundamental building blocks tied to unnaturalness?

It’s hard to imagine that simply replacing fields with something else would eliminate the problem, because we know field theory (and its quantum fluctuations) does a remarkably good job of explaining particle physics on distance scales from macroscopic down to 10^(-18) meters, and the naturalness problem arises already at 10^(-18) meters. Just as Einstein’s new theory of gravity didn’t make it necessary to fix all predictions of Newton’s theory (i.e., bridges didn’t fall down just because gravity is more complicated than Newton thought), a new theory of nature isn’t likely to eliminate the naturalness problem — unless, of course, the new theory changes the Standard Model at the energy of 500-1000 GeV (distance of 10^(-18) meters) or so. (Remember that there’s no naturalness problem with the Standard Model if v_max is in this range.) However, if fields were replaced with something else in the 500-2000 GeV range, we would have expected predictions from quantum field theory for physics at the Large Hadron Collider to begin to fail at the highest energies. Instead, those predictions work very well.
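For readers wondering where the distance-to-energy dictionary in the reply above comes from: it is the standard quantum relation E ~ ħc/d. A quick sanity check, using the standard value ħc ≈ 197.327 MeV·fm (the 500-1000 GeV figure quoted above is the same order of magnitude):

```python
# Standard distance <-> energy conversion in particle physics: E ~ hbar*c / d.
# hbar*c is approximately 197.327 MeV*fm = 0.197327 GeV * 1e-15 m.
hbar_c = 0.197327e-15   # GeV * meters

d = 1e-18               # meters: the distance scale quoted in the text
E = hbar_c / d          # corresponding energy scale in GeV

print(round(E))         # about 197 GeV -- the few-hundred-GeV range
```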

I mean: maybe what is unnatural is the field concept itself, despite the correlations between the data and the assumption.

Remember “unnatural” means “non-generic”. What’s a “generic” concept?

I mean by an unnatural concept of field that reality is built on other kinds of primaries.

I think you are mixing two definitions of natural.

The spectacular cancellation between known and unknown large quantities is what is unnatural. It is not concepts that are unnatural here.

The point is: even if we replace fields with something else, that can only change the UNKNOWN large quantities. The known large quantities will still be there… because we know quantum fluctuations of known fields exist and have large energy. That part isn’t dependent on the field concept.

You are really a true scientist, an honest, sincere and wonderful person, Matt.

All respects and regards are due to you for your most respectful science.

Thanks

One thing I’m not sure I understand is how the Higgs particle gets mass. It seems logical that changing the Higgs field value increases the energy of empty space and that makes it more difficult to vibrate the field. However wouldn’t the same thing work for any other particle (electron, up quark, etc)? For the other particles the mechanism of getting mass is through interaction with the Higgs field. Not because changing the electron field value would increase the energy of empty space. Is that correct? Because my understanding is that if the Higgs field were zero, the other particles would be massless – so the zero point energy would not help them get their mass.

Or the zero point energy would give them some very small mass even without the Higgs field? And the Higgs boson does not need any other non-zero field to interact with, because for the Higgs it’s enough to use the zero point energy to get its mass?

Very good question. There is a pedagogical flaw here, and you’ve identified it. I have to think about whether I can improve this. Both your old impression *and* what I’ve said are correct, but I agree the relationship between them isn’t well-explained.

For all fields that we know so far (and this may not be true of other fields currently unknown), the mass of the particle is associated with the Higgs field in some way. However, the story for the Higgs particle is slightly different from the others.

About the Fields: The electron field’s average value is zero; this is true of any fermion. The W field’s average value is zero; calculation shows that the minimum energy of empty space arises when the W field is zero on average. Indeed, the Higgs field is the only (known) field for which there’s something complicated to calculate in order to determine whether the energy of empty space prefers it to be zero or not.

About the Particles: This is the same, in a sense, for the Higgs and for everything else. It is true that particles like the electron get their mass by interacting with the Higgs field. But how does that work in detail? An electron is a ripple in the electron field. That means that the electron field, which is normally zero in empty space, is non-zero as the electron goes by. But where the electron field is non-zero, it interacts with the Higgs field, which also isn’t zero even on average; and the interaction of the two increases the energy in that region. Consequently the electron field’s ripple has more energy than it would have in the absence of this interaction, and the excess energy is mass-energy, crudely speaking. This is general. The issue is: by how much does turning on a field like the electron field increase the energy of empty space? The amount by which it increases tells you something about the mass-energy of a ripple in that field.
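In standard textbook notation (a well-known result, not derived in this thread), the "excess energy from the interaction" described above is summarized by the Yukawa relation between a particle's mass, its coupling strength to the Higgs field, and the field's average value v = 246 GeV:

```latex
% Standard Yukawa relation (textbook result): a fermion's mass comes from
% its coupling strength y_e to the Higgs field's average value v = 246 GeV.
m_e \;=\; \frac{y_e \, v}{\sqrt{2}}
% A larger coupling means the ripple carries more interaction energy,
% i.e. a heavier particle; if v were zero, this mass would vanish.
```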

The crucial difference for the Higgs field and particle is that, because the Higgs field affects so many other fields and their particles, there’s a complex interplay between the Higgs field’s value, the Higgs particle’s mass, and the energy of the quantum fluctuations of the other fields (and how they depend on the Higgs field’s value). The description I gave above of how the Higgs particle’s mass comes about is not unique to the Higgs, but its relation to this complex curve, which comes from the quantum fluctuations of other fields, is special.

Thank you very much for the explanation.

Regarding “the Higgs field is the only (known) field for which there’s something complicated to calculate in order to determine whether the energy of empty space prefers it to be zero or not”:

I’ve read that in the theory of the strong interaction (QCD) the vacuum has interesting structure, too. In particular that there are quark and gluon condensates characterizing the vacuum. Wouldn’t these also be due to fields that prefer to be non-zero in empty space?

Good point. I should have said: “elementary field”. Quark and gluon fields do not have any such issue, but composite fields made from a combination of quark and anti-quark fields, and from pairs of gluon fields, have this issue. I will get into this subject when discussing “solutions”.

Hi – I’m somewhat confused. You state v_max is how far you could push the value of the Higgs field within the tolerance of the Standard Model, but that does not seem to imply the real field has to be high; in fact, unless I misunderstand, it is deemed to be 246, which is low and does not need any unnatural factors – so why do you need to push the foundations of the model that far?

The point here is not to put the cart before the horse. We have *measured* the Higgs field to be 246 GeV. But to show that the theory *predicts* this, we need to show that 246 GeV is actually a minimum of the energy. To prove this we must do a general calculation that doesn’t first assume that the Higgs field is low. Otherwise we’d be in danger of assuming what we were trying to prove.

But don’t worry; if we *had* assumed that the Higgs field’s value was small, but that the Standard Model was valid up to vmax >> 500 GeV, then we would simply have discovered, by calculation, that our assumption was wrong… and that there is indeed no minimum in the low-Higgs-field region at all.
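As a cartoon of what "showing that 246 GeV is actually a minimum of the energy" means, here is a toy energy curve whose parameters are invented and chosen by hand so the minimum lands at 246; the real Standard Model curve comes from quantum corrections involving all the fields, which is exactly why it has to be calculated rather than assumed.

```python
# Toy energy-of-empty-space curve V(v) as a function of the Higgs field's
# value v, with invented parameters chosen so the minimum lands at 246 GeV.
# The real curve involves quantum corrections from every field.
lam = 0.13                 # illustrative quartic coupling (made up)
mu2 = lam * 246.0 ** 2     # chosen by hand so the minimum sits at v = 246

def V(v):
    return -0.5 * mu2 * v ** 2 + 0.25 * lam * v ** 4

# crude grid search from 0 to 500 GeV for the minimum of the curve
grid = [i * 0.5 for i in range(1001)]
v_min = min(grid, key=V)

print(v_min)   # the minimum of this toy curve sits at 246.0
```

The point of the general calculation in the reply above is that nothing guarantees such a curve has its minimum at a small value; that has to be checked, not assumed.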

Yes – but even according to your text, 246 does not have to be the absolute minimum but just a regional minimum, so why strain a good model to breaking point?

Hmm. Not sure I understand you yet. We want to understand whether the good model could be the complete model or not. If it is the complete model, then the thought experiment of pushing up the Higgs field to large values should be legitimate. And then we discover that if it *is* the complete model, it actually isn’t that good, because there won’t be a minimum in the energy anywhere near 246 GeV. Which leads us to think it can only be a good model up to around 500-1000 GeV, and then we should expect other phenomena to start showing up, or maybe something else weird is going on.

Sorry, seems I’m the one who does not understand, but not sure where I go wrong:

There is one Higgs field at 246 GeV – correct?

Because it’s there, there has to be a minimum in the energy density at this level – ?

We don’t know why, but we don’t need any unnatural factors to make it so?

If the field were >> 500 GeV we would need unnatural factors – but it isn’t – so why does the Standard Model have to hold for hypothetical fields that do not (or may not) exist? As an analogy, it’s like designing a car that can go 500 mph while knowing you can never exceed 60 mph on the highway.

Ah — do not confuse the Higgs field’s actual value with v_max. v_max is as large as we *COULD* take the field and still trust our equations. The issue isn’t whether the Higgs field’s value is >> 500 GeV; it’s whether the equations would correctly describe the world if the Higgs field’s value were that big.

Say it this way: Suppose you know a car can go at 500 mph, but you discover the car is going at 60 mph. Now you want to explain: why is it at 60 mph, given that it can go 500 mph? One way to find out is: try running the car (maybe just in your mind) at 200 mph, at 300 mph, at 400 mph, at 20 mph, at 30 mph. Maybe you discover that the engine’s force and friction on the wheels exactly balance at 60 mph: if you try to run the car at 20 mph, it will speed up; but if you try to run it at 300 mph, it will slow down; and right at 60, it will coast.

But now imagine, for example, that you had a car that could go 500 mph, and had an engine that was powerful enough to go that fast in the absence of friction. Wouldn’t you be surprised if it turned out that the engine and friction balanced when the car was going 0.0001 mph? Our problem is vaguely akin to this.
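The car analogy can even be made into a toy computation (all numbers invented) in which a constant engine force balances a speed-dependent friction at exactly 60 mph:

```python
# Toy version of the car analogy (all numbers invented): a constant engine
# force balancing a friction force that grows linearly with speed.
ENGINE = 600.0                    # constant engine force, made-up units

def net_force(speed_mph):
    friction = 10.0 * speed_mph   # made-up friction law
    return ENGINE - friction

balance = ENGINE / 10.0           # speed at which the forces balance

# below the balance point the car speeds up; above it, it slows down
print(balance)                    # 60.0 mph, the "natural" cruising speed here
print(net_force(30.0) > 0)        # at 30 mph the car accelerates
print(net_force(300.0) < 0)       # at 300 mph the car decelerates
```

The naturalness puzzle is the analogue of the balance point landing at 0.0001 mph instead of 60: possible, but crying out for an explanation.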

Thanks for your questions. You and several other commenters are finding a number of pedagogical flaws. I’m going to have to do a serious rethink of how to reword some of this article.

Thanks for your patience,

Okay, in the case of the car model I have to assume there is a hidden direct correlation between the car’s speed and the friction, and I suppose this is what you show in diagram 7; but if that correlation became known, would it not become natural?

From diagram 5 it would however appear you don’t need absolute cancellation as long as the curve beyond 500 does not dip lower than the point at 246. So if you have a field that increases exponentially, like the first of the blue graphs (it would be interesting to know what field that represents), and it were considerably stronger than any other field, one could postulate that no more potential low points lie beyond 500+. Wouldn’t that be a more “natural” assumption?

It isn’t just an issue of whether the correlation is known; the issue is whether it is a pure accident, or whether there’s a reason.

As for the second paragraph’s suggestion: if that were the case and there were no accidental cancellations, then the minimum of the curve would be at zero, and thus the Higgs field’s average value would be zero and the particle’s mass would be large — in short, you’d have a world in class 2, not class 3.

Going through the comments I think I was asking the same as JonW in a somewhat naiver way, and thanks to your explanations to me and also to him I think I now finally get the picture (more or less). When I say correlation I mean one with an underlying but unknown reason, because random would be unnatural, and I guess that’s what you’re saying too. I suppose it’s somewhat of a long shot, but if particles are produced in pairs, maybe universes are also; and then could we not possibly imagine a corresponding “anti-universe” with all fields reversed? If two such opposing universes were still in the process of separating, or possibly colliding, then could this lead to an overall field cancellation and appear locally similar to the situation that we experience now?

You could imagine that, but now make equations that actually do it. That’s the hard part. Then, having succeeded, make a prediction based on those equations. That, too, may be difficult.

The maths to do that is beyond me, but I would predict that a mixture of opposing universes would not be very stable, at least not for a sufficiently long time to get to the present stage

When I said that there is no such thing as a vacuum, it was as a question, but in a sense doesn’t a field permeate all of space? At some point in the creation of the universe, matter must have sprung from fields, right? Of course that would be the particles that later became hydrogen and helium, etc. You can speak of virtual particles which spring in and out of existence, evidently due to the peaks in the waves that fields have, I would think. Anyway, I’m looking at it from the perspective of how I would visualize an ocean wave peaking out and then disappearing, while the ocean still fills all space, though we only see the surface. Please excuse my less than knowledgeable questions.

I should have said no such thing as empty space, not vacuum, sorry.

@Strassler The fact that things are cancelling out and the total energy is nearly flat is another way of saying that there is some unknown symmetry in nature! And what is that symmetry if it is not supersymmetry?

I would like to draw your attention to a recent blog post by Sean Carroll where he discusses the early results from dark matter searches, which are giving hints of dark matter at 5-10 GeV energy levels. He pointed out that if that turns out to be correct, then there might be roughly equal numbers of baryons and dark matter particles, which might have something to do with baryon number conservation. What kind of symmetry can give rise to equal numbers of heavier (but not that heavy) dark matter candidates? And other results point to interacting (exciting) dark matter, suggesting the presence of dark-electromagnetism-like forces that do not interact with electric charge.

A symmetry is only one possible explanation. Dynamical effects can cause this also. I will discuss this soon.

Regarding dark matter: again, a symmetry is only one possible explanation. It could also be a dynamical effect.

As for dark forces and dark particles, see: http://arxiv.org/abs/hep-ph/0604261

Matt, you have an error in your text. There is at least one theory which doesn’t have that naturalness problem.

You haven’t read carefully. There are MANY theories that don’t have a naturalness problem, and I am going to explain the most famous ones soon. I will not explain those that are only believed in by one person, however.

Touché :) I made the decision to take another route (in order to change the paradigm in particle physics): experiments it is. My last option, so to speak. But blasting a small enough amount of antimatter without high-tech lab equipment is going to be *extremely* difficult. So wish me luck! :)

“….imagine, for example, that you had a car that could go 500 mph, and had an engine that was powerful enough to go that fast in the absence of friction. Wouldn’t you be surprised if it turned out that the engine and friction balanced when the car was going 0.0001 mph? Our problem is vaguely akin to this.”

“It isn’t just an issue of whether the correlation is known; the issue is whether it is a pure accident, or whether there’s a reason.”

Please excuse my bombastic replies to Larson, wherein I included all physicists and launched a seeming attack on the Standard Model. I think I know what you are saying, but your article is far too technical for my understanding of physics. I merely wish to understand the principles behind these concepts, and I cannot completely subscribe to SM theory because of the singularity problem and the BB theory. The SM works by and large and is proven by tests, but it makes the universe appear unnatural because, as you explained to “zbynek”, the universe is calibrated to include higher energies in the Higgs field when it works perfectly well at the much lower energies. No, there must be a reason for that. The universe must have more fields operating on it. Perhaps another force or body of energy that operates on the space. I know you don’t have time for other people’s ideas on what could be out there, but just suppose that the universe is not a bubble floating in hyperspace, that it isn’t a brane either, but just an empty space encompassing a real source of unimaginable energy. Suppose that energy is seated in the center of our universe and our universe is orbiting this source. Say that we are at a safe distance from the radiation of this source, a goldilocks zone, but closer and farther distances exist. This could be that margin of possibilities that some physicists see as multiverse potential. Ok, that’s my two pennies’ worth of contribution to this topic and discussion. I expect physics will have to readapt to new concepts, because the old one is ready to be discarded. Taboos in physics are preventing fine scientists like you from exploring other possibilities. Maybe with the next generation of scientists, things will begin to move on. No, don’t get me wrong, the Standard Model could be the closest thing to the reality of this universe, a real huge step forward, but it must not stagnate.
I’m truly mesmerized by you talking so openly to a general audience that has no knowledge or very little knowledge of physics. Thank you for that.

“I can not completely subscribe to SM theory because of the singularity problem and the BB theory”

The Standard Model describes all particles and forces *except* gravity. The singularity issue (which we don’t even know is an issue) has nothing to do with the Standard Model. The Big Bang theory (which is in excellent agreement with measurements — I don’t know why you dislike it, given how fantastically well it works) involves both gravity and the Standard Model. If you don’t like the Big Bang theory, you probably dislike the gravity part of it, and not the Standard Model anyway. As for the Standard Model alone, if you don’t like it, you have some explaining to do: it works for hundreds of measurements. It may not be the whole story — indeed it is unlikely to be the whole story — even for non-gravitational physics. But it works extremely well.

Your impression that physicists are wedded to the conventional wisdom and close-minded is simply wrong. When I go to conferences, there are always many talks that go beyond the conventional wisdom. The particular suggestion that you made (or something similar) has certainly been considered. But the problem is that there are dozens and dozens of such suggestions, many far more radical than what you suggest. Almost all of them will turn out to be wrong; I could fill this blog with all the crazy false ideas that I and my colleagues have had, and it would make this blog impossibly confusing. (Other bloggers approach this issue differently.) I tend to report the mainstream, and especially the best-established part of the mainstream; I think that is the best way to explain what we think we know. And of course, some of what is mainstream now will be discarded someday; that’s certain. But we don’t know which parts will be discarded, or why… so we have no choice but to be patient, work hard, and wait for experimental or theoretical insights to push us in the right direction.

Thanks, Matt, for giving it to me straight. I’m at great odds coming up against you, for if I ask why the SM can’t explain gravity, you’d just say because we haven’t discovered gravitons yet. (I could be wrong in my assumptions.) Something is truly a puzzlement here, with the theory of particles. We have various fields. Every particle comes as a ripple of its field. You say fermions have zero-energy fields. To me this means that the particle mops up the pre-existing field, if particles are ripples in a field. So, if not out of the Higgs field, then from a composite field of the EM and nuclear (strong and weak) fields. The Higgs field gives mass to some particles but not to the Higgs boson, which is scalar in nature and has no spin (?). How can a particle exist without a spin? The Higgs boson has a huge mass, but the Higgs field is not giving it that mass. Could we say, then, that the Higgs boson doesn’t exist without a collision of hadrons, that it is a by-product of that collision, and that the kinetic energy of the collision produces it?

Singularity, being a mathematical deduction of the gravitational force’s capabilities, indeed doesn’t belong in the SM, because the SM cannot explain gravity with the known particles. I get that; I knew that. I just mentioned it anyway. Why can’t I accept the SM? Because it’s based on the philosophy that everything in nature must balance out. So we have particles and antiparticles. This leads to an alternate-realities theory, which to me is as unnatural as can be. Why do things have to balance out? That leads to the conclusion that the universe came out of zero energy. If that’s not unnatural, I don’t know what is. Next, the multiverse problem: where did the first universe come from? The Big Bang theory: foggy start, gloomy ending; the universe is on a ‘self-destruct’ course. Mankind has no future, since we cannot leave this universe. Scouting the galaxy by manipulating the physical laws and creating wormholes is a short-term solution, if achievable. And then escaping the incinerating hug of our dying star, or getting wiped out by an asteroid or a comet, or by our own means. I think the universe is a much better place than what theory suggests. Yes, it’s full of deadly radiation, but something placed us in a safe place. Now, that would be unnatural if nothing else existed but a physical process of evolution (creation and destruction of matter), and we just happened to be an accident of this mindlessness. That to me would be highly unnatural. Otherwise, the Standard Model doesn’t bother me at all. Oh, I might have a wrong view of physicists as a whole, but you are not in that group, so I’m glad I stumbled on your blog.

Wishing you all a positively peaceful, relaxing, rejuvenating weekend!

Correlation with data does not mean identity with ontology; plus, if we accept the agreed-upon fact of the underdetermination of theories by data, can’t we recognize that maybe ontology, in its most fundamental primary aspect, is something totally different from field activities with virtual particles, no matter how many experiments match our purpose-tailored, data-matching theories?

Maybe the deep grand secret of nature is that it is so constructed that the mind can describe it with many mathematical structures, every one of which matches data and is able to make true predictions. That is a fantastic interaction between Mind and Nature, an interaction that cannot be ignored anymore.

It is also what we see in ontological aspects of scientific theories.

Excuse me, Matt; I am confused. Is nature un-natural as it is, or as we see it, or as some of us see it, or as you see it?

I mean, is un-naturalness relative, or absolute, or in the category of (maybe, perhaps, it could have been…)?

I need a clear-cut answer; isn’t that science?

The Standard Model is unnatural as quantum field theory experts see it. That is: the Standard Model is a quantum field theory (and quantum field theory applies also to many other systems in nature, mainly those in solid-state physics). Examination of the physical systems to which quantum field theory applies suggests that our understanding of how quantum field theory works is excellent. Among all those systems, particle-like ripples are common, and there are some that are Higgs-like. If you look at all the other systems, you never find a light Higgs-like ripple with a mass-energy mc^2 far less than the energy scale at which you find other particles and forces associated with that ripple… or more precisely, the few cases where you do find it are very well-understood, and the principles that apply in those cases don’t apply to the Standard Model. All of this suggests strongly that our understanding of quantum field theory in general, and the naturalness issue in particular, is correct.

Nature, on the other hand, is not yet known to be unnatural, because we don’t know v_max yet. Only if we can show, using the LHC and other experiments, that v_max is much larger than 500 GeV will we be able to conclude that nature itself is, in this sense, unnatural.
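[A rough way to quantify the stakes, in the v_max notation used above. This is my schematic sketch, not a formula from the discussion; numerical factors depend on the particles running in the loops:]

```latex
% Schematic estimate: if the Standard Model remains valid up to an
% energy scale v_max, quantum contributions to the Higgs field's
% mass-squared are of order
\delta m_H^2 \;\sim\; \frac{g^2}{16\pi^2}\, v_{\max}^2 ,
% so keeping the Higgs field's observed value v \approx 246 GeV
% requires a cancellation of roughly
\text{one part in}\ \left(\frac{v_{\max}}{v}\right)^{2} .
```

For v_max near 500 GeV the required cancellation is mild; for v_max near the Planck scale it is the famous one part in 10^30 or so, which is the unnaturalness in question.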

As you may have noticed, I am always looking at the global, not the partial. So, is it safe (for resting of the mind) to say, as a global absolute fact, that the mighty forces of OUR cosmos are balanced in a truly unexpected razor-edge equilibrium?

Please: yes, no, or we don’t know. Thanks.

We cannot make statements yet about our universe — only about the theory known as the Standard Model (plus gravity) which may or may not describe our universe. Remember: we don’t know v_max, and we don’t know yet that it isn’t 500 GeV, with discoveries at the LHC just around the corner.

Matt, where are your articles on the Standard Model (or “partial grand unified theory”?) that compare its predictions (by calculation) with experimental data? I have heard you say repeatedly that the SM is spectacularly accurate, and that sounds pretty good, but I want to see, check, and calculate its accuracy one to one (calculation versus data/experiment/observation) for each (simple) concrete specific physical phenomenon. I also want to know where/how/why the SM breaks down, and where (under what conditions: range, scale, applications, etc.) it is robust, with specific (simple) examples. For what applications is the SM useful? Is the SM useful for modeling/calculating the behavior of a living cell or the molecular machinery inside a cell? With proper simulations, does the SM accurately picture/draw photon–molecule interactions? For example, how a single photon (or multiple photons?) interacts with and alters the structures of (macro)molecules inside a cell, or how a molecule emits a photon? Does the SM describe, almost completely accurately, every single moment of how a photon is absorbed by a molecule during photosynthesis (with where the energy is localized, in which fields, at every instant)? Could you point me to such simulation models (CG/video simulations, etc.)? Or to CG/movie simulations (with real data) based on the SM that depict the propagation of a photon in vacuum?

You can find a few hundred things to check if you read

https://twiki.cern.ch/twiki/bin/view/AtlasPublic

https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResults

http://lhcbproject.web.cern.ch/lhcbproject/CDS/cgi-bin/index.php

http://cds.cern.ch/collection/ALEPH%20Papers?ln=en

http://l3.web.cern.ch/l3/paper/publications.html

http://www-zeus.desy.de/zeus_papers/zeus_papers.html

Those are just some of the results from the last 20 years; there are hundreds more from earlier periods. For example: http://inspirehep.net/search?ln=en&ln=en&p=collaboration+jade&of=hb&action_search=Search&sf=year&so=a&rm=&rg=100&sc=0

There are no known and confirmed large deviations from the Standard Model, and certainly no glaring ones, in any particle physics experiment. That means: agreement with many tens of thousands of independent measurements (and if you count each data point as a separate measurement, you’d probably be in the millions). By contrast, if you took a theory which is like the Standard Model but is missing one of the three non-gravitational forces or one of the types of particles that we know about, it would be ruled out by hundreds, even thousands, of experiments.

The only things in nature that are known not to fit within the Standard Model plus Einstein’s gravity are:

1) Dark energy (which you can put into Einstein’s gravity by hand, but presumably that’s too crude)

2) Dark matter (we have ideas on what it might be, but nothing in the Standard Model can do the job)

3) Neutrino masses (which require a small amount of adjustment to the theory — this may be a small issue)

Does this help?

Now, there’s a separate part of your question: for what phenomena is it *useful* to use Standard Model equations, rather than some simplified version. For photons interacting with molecules, trying to use the Standard Model in its full glory would be impossibly difficult. So what do you do?

1) You choose what you want to study (molecules)

2) You take the Standard Model and derive simpler versions of the equations that apply for the study of molecules — giving up details in return for simplicity

3) You then use those simpler equations to study complex molecules and their interaction with photons.

So the trick, and the subtlety, is step 2. You might fail to come up with sufficiently simple equations, so you can’t carry out step 3; or you might oversimplify and then step 3 won’t work. Of course, that’s not a failure of the Standard Model; it’s a failure on scientists’ part to think of a way to apply it to a complex problem.

Needless to say, if you want to check the Standard Model carefully, you check it on simpler problems first! And there it has an incredible track record. And yes, you can derive atomic physics, and the interaction of atoms and light, from the Standard Model, although if the atom is complex you have to rely on computers; and you can go on from there to derive computer techniques for studying complicated molecules, etc.
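[The “step 2” derivation described above can be sketched in equations. This is my illustration, not part of the original reply: for electrons moving much slower than light, the QED piece of the Standard Model reduces to the ordinary quantum mechanics that atomic physicists and chemists actually solve.]

```latex
% Full theory (one piece of the Standard Model):
\mathcal{L}_{\mathrm{QED}}
  = \bar\psi\,\big(i\gamma^\mu D_\mu - m_e\big)\,\psi
  - \tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu}
% Simpler equations valid for atoms and molecules
% (slow electrons, soft photons):
\quad\longrightarrow\quad
i\hbar\,\frac{\partial \Psi}{\partial t}
  = \left[-\frac{\hbar^2}{2m_e}\nabla^2
          - \frac{e^2}{4\pi\varepsilon_0\, r}\right]\Psi
% The details given up in the trade (fine structure, Lamb shift, ...)
% can be restored systematically when more precision is needed.
```

The simpler equation on the right is what makes step 3 tractable; the “subtlety of step 2” is knowing which corrections you may safely drop for the problem at hand.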

Matt, yes, what you say sounds consistent (but just listing papers is not so helpful). So I want to see this part: “Needless to say, if you want to check the Standard Model carefully, you check it on simpler problems first!” Could you demonstrate a few of the simple(r, -st) cases (readable/understandable at freshman-level physics) where 1) the SM works well and 2) the SM fails? If you do not have time, just one good demonstration (comparing “calculation” and “experimental data”) for each of 1 and 2 is fine. How about how the SM draws/simulates the propagation of a photon in vacuum? I hope it is simple enough that it would be easy to generate animated 3D CG movies of it over time. I want to see how a photon radiates/moves/propagates (how the photon field behaves) in 3D CG as time ticks, with all the known field values visibly fluctuating (perhaps with vector and color-density representations, etc.) in real time. Or a 3D video representation (an accurate simulation from the SM) of quantum fluctuations in the vacuum itself would be nice (if possible). Or how the electron field and the photon field (with all the known calculable/observable parameters, like field direction, energy/wave localization and shape, spin direction of the photon and the electron, etc.) behave when a photon is emitted from (or absorbed by) an electron, in a 3D CG movie with time.

Not only that: the SM does not specify any constant, does not specify any coupling strength, nor any reason for three generations of particles, nor any hint of a system behind the force values. As you said before, it is only an effective theory, meaning a tool for us to calculate the effects of interactions, no more.

Now, after the collapse of the hope of producing the above-mentioned unknowns via M-theory, where 10 to the power 500 ensembles are possible and where all hope of finding a theory that specifies all cosmic parameters, instead of inputting them by hand, has vanished, even discovering new forces and particles via any route cannot solve the fundamental problem of finding the parameter-generating mechanism; as such, any hope of solving the hierarchy problem may be rendered a mirage.

Not so. There is no reason for this conclusion. What happened to the M theory dream — which may have nothing to do with the real world — has no bearing on whether the hierarchy problem has a solution.

No, Matt, it is so. Without a theory to specify the parameters of nature, we can never explain the relation between the Planck mass and the W and Z masses, or the ratio of gravity’s strength to the other forces. That is the hierarchy problem as stated by you; the death of the M-theory dream killed any hope of solving it. So it is so, Matt. It is so, my friend.

In addition, it is logically impossible to construct a mathematical system in which the constants of the SM are variables whose values we obtain by solving the system, since then a new set of meta-constants would be required to solve the equations, which in turn would need a hyper-system, leading to infinite regression.

Matt, I see there is a puzzle if you hold the following assumption to be true: “there’s no obvious reason to expect these unknown effects in red are in any way connected with the known contributions in blue.”

But isn’t the cancellation of the different terms, the blue and red ones, some kind of strong hint that there is such a connection? The reason might not be obvious, but doesn’t it look likely (or, some might say, even “obvious”) that such a reason must (or could) exist?

Someone brought up an analogy, like: imagine some thousand pieces of metal, all of different shapes. But when you put them all together, you get a car instead of just a pile of metal. Isn’t that strange? Well, only if you assume the pieces were created independently of each other.

Apart from my question, many thanks for your efforts to explain this matter to us.

Markus, I had the same reaction early on in learning about these matters (in which I am still a novice, and a layman). I believe one response often given is that it is not obvious such a reason (for cancellation between opposing terms) must exist, because there is an alternative: a multiverse, where each nucleating bubble has its own physics, and anthropic selection effects lead to the values in our universe being constrained to come so close to cancelling, with no further explanation necessary or even possible for things taking on the values they do in our observed reality.

This is only half-serious, but I think the analogous reasoning in your example of a car would be: let’s say you are a member of a species that can only exist inside a car. And you live in a universe where thousands of pieces of metal, of random different shapes and sizes, are created from nothing, thrown together randomly and spewed out into space, over and over and over again without end. You find yourself in a car, wondering just how this car got put together and what caused its pieces to fit together just so. It turns out there’s no good reason that would satisfy you, save that you couldn’t have been there in the first place to ask the question in the Vast majority of assemblages: the fact of your existence selects from among these assemblages only those, a Vanishing minority, that allow you to be there.

I sure hope it turns out there’s more of a reason than this, though!

1) Our universe cannot exist without un-naturalness or hierarchy.

2) Our existence is not the cause of our universe.

3) Our universe is not the cause of our existence.

4) There are no laws, principles, rules, etc. that dictate a similar scale for the energies and forces of nature.

Then where is the un-naturalness problem? Is it in the mind of the beholder?

Ash, please read this article…

http://io9.com/did-the-higgs-boson-discovery-reveal-that-the-universe-512856167

All scales of energies and forces in the universe do not follow our expectations of naturalness but are chosen to satisfy the attributes of a viable universe. So where is the hierarchy problem, then?

Allow me, Matt, to say that the origin of the naturalness problem is naturalism and the physicists’ expectation that the universe should follow their criteria; but the cosmos itself feels no problem whatsoever having a weak scale 16 orders of magnitude less than the Planck scale. So follow the universe, not the expectation.

Pingback: Un-Naturalness or A Miracle? | The Way

Professor Strassler: In my view, based on many studies and comparisons, the un-naturalness aspect and the hierarchy problem are in reality features of the supreme harmony of our universe; the problem lies in our concepts, not in reality.

I assume, in saying this, that you welcome both those who agree with you and those who are not convinced, not by the scientific part but by the philosophical part of what you say.

Thanks

What is the explanation of the most un-natural aspect of space: that all quantum fields exist in a state of complete interconnection, occupying the same space, without resulting in a global universal short-circuit of all possible interactions simultaneously, rendering the universe in a state of ultimate chaos?

Pingback: A Quantum Gravity and Cosmology Conference | Of Particular Significance

Sincere advice: please read what any unbiased mind would reach in the pingback link “Un-Naturalness or A Miracle?”. I respect the writer very much for saying what many refuse to see. Thanks, Dr. D

Pingback: Did the LHC Just Rule Out String Theory?! | Of Particular Significance

Pingback: Can Nature be unnatural? | The Great Vindications

“[Note: Some of you may have read that these calculations of the energy of empty space give infinite results. This is true and yet irrelevant; it is a technicality, true only if you assume vmax is infinitely large --- which it patently is not. I have found that many people, non-scientists and scientists alike, believe (thanks to books by non-experts and by the previous generations of experts -- even Feynman himself), that these infinities are important and relevant to the discussion of naturalness. This is false. We'll return to this widespread misunderstanding, which involves mistaking mathematical technicalities for physically important effects, at the end of this section.]”

I get the impression that here (and in the section it refers to) that you’re going for a slightly different audience than the rest of the piece, namely the grad-student-just-out-of-QFT perspective. If that is what you’re going for, I think the section doesn’t really address the principal source of confusion. You point out that finite theories still have hierarchies, but I think the greater source of confusion is why infinite theories are pathological at all. When students are first introduced to renormalization (especially via stuff from the previous generation of experts) it’s often stated that the infinities are actual infinities, which are then simply included in the bare values of the relevant constants. Essentially, this is the “why not just use dimensional regularization?” confusion, which often makes people new to the subject fail to understand why divergences are a problem in the first place. I don’t know if this would be too far from your primary audience, but you might want to briefly address this particular source of confusion.

Did you still feel that way regarding the second attempt to address this issue at the end of the article?

That’s the section I was referring to, actually. You mostly emphasize that the issue of infinities is a technical one, and that finite theories also have naturalness problems. That shows the issue of infinities and the issue of naturalness are distinct, but if someone started out the section wondering why we can’t just set vmax to infinity/use dim reg, that wouldn’t dissuade them.

I suppose in a way, the counter to that is earlier in the article, in that even if you took infinite vmax seriously you’d just need infinite fine-tuning and thus would be infinitely unnatural. While the structure buries this a little since you dismiss infinite vmax early on, your current structure works better for the majority of readers. So on reflection I don’t think there’s anything you need to add, besides maybe a quick note for those who might take vmax=infinity seriously.

Yes, in the end, if a young expert-in-training doesn’t understand this point even after what I’ve written, he or she is going to have to go through the exercise: take a theory that has even the littlest bit of physics beyond the Standard Model, even just one heavy fermion with a large mass M and a small Higgs boson coupling y; use dim-reg or anything else to get rid of the infinities; and look at how the Higgs mass-squared depends on M. That fermion comes in and blows everything to pieces.
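[Schematically, the result of the exercise described above is the following; this is my summary of the standard one-loop computation, with signs, logarithms, and scheme-dependent factors suppressed:]

```latex
% One heavy fermion of mass M, coupled to the Higgs with Yukawa
% coupling y, shifts the Higgs mass-squared by roughly
\delta m_H^2 \;\sim\; -\,\frac{y^2}{8\pi^2}\, M^2 \;+\; \cdots
% This survives dimensional regularization or any other scheme:
% for M far above the weak scale, the shift dwarfs the observed
% (125 GeV)^2 unless it is cancelled with extreme precision.
```

The point is that the M^2 dependence is physical, not an artifact of how the infinities were handled, which is exactly what the exercise is meant to demonstrate.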

Thanks for the fine article.

I disagree on one minor point. In mentioning this fine-tuning of the Higgs field or Higgs boson to one part in 1,000,000,000,000,000,000,000,000,000,000 (to use your number), you say, “Such extreme “fine-tuning” of the properties of a physical system has no precedent in science.”

But no, there seem to be at least two other similar cases of equally amazing “fine-tuning”:

(1) Fine-tuning of the cosmological constant (vacuum energy density) to one part in 10 to the 120th power (quantum field theory gives us an expected value 10 to the 120th power higher than the highest value compatible with supernova observations).

(2) Fine-tuning of the proton charge and electron charge, which match to 1 part in 10 to the 20th power, something unexplained by the standard model.

So this Higgs field fine tuning seems to be not at all a unique case in nature.

Regarding (1) — I do discuss this, extensively, elsewhere. But the problem with the cosmological constant is that it too is unexplained. We do not have a proof, as yet, that its small value is due to fine-tuning, so we can’t give it as an example of fine-tuning, only one of *possible* fine-tuning. Still, I perhaps should have clarified the wording.

Regarding (2) — interesting point, but I don’t think this is fine-tuning either. In many extensions of the Standard Model, anomaly cancellations fix this ratio to be exactly 1. If there were only one generation of fermions, the anomalies would fix the ratio precisely. Charges can’t vary; they don’t get quantum corrections. So if a symmetry or geometrical relationship (like anomaly cancellation) fixes the ratio to be 1, it will be forever 1, no matter what happens. This is not true for the Higgs particle’s mass, which depends, for instance, on the values of all scalar fields in the universe and on every parameter in the field theory.

Pingback: Quantum Field Theory, String Theory, and Predictions (Part 7) | Of Particular Significance

Pingback: Elegance, Not So Mysterious | 4 gravitons and a grad student

Pingback: What’s the Status of the LHC Search for Supersymmetry? | Of Particular Significance

Pingback: Visiting the University of Maryland | Of Particular Significance

Pingback: A 100 TeV Proton-Proton Collider? | Of Particular Significance

Pingback: What if the Large Hadron Collider Finds Nothing Else? | Of Particular Significance




Dear Sir,

I am a novice. I have a basic question.

From what I have understood:

Quantum fields were present in some empty space. Then quantum fluctuations occurred and created energy. Then E=mc^2 starts to function. Inflation occurs. And the rest is history.

But how did empty space come into existence in the first place?

And how did quantum fields come into existence in the first place?

Regards.