Category Archives: The Scientific Process

At a Workshop on Hidden Particles at the LHC

Cutting edge particle physics today:

I’ve been spending the week at an inspiring and thought-provoking scientific workshop. (Well, “at” means “via Zoom”, which has been fun since I’m in the US and the workshop is in Zurich; I’ve been up every morning this week before the birds.) The workshop brings together a terrific array of particle theorists and Large Hadron Collider [LHC] experimenters from the ATLAS and CMS experiments, and is aimed at “Semi-Visible Jets”, a phenomenon that could reveal so-far-undiscovered types of particles in a context where they could easily be hiding. [Earlier this week I described why it’s so easy for new particles to hide from us; the Higgs boson itself hid for almost 25 years.]

After a great set of kick-off talks, including a brand new result on the subject from ATLAS (here’s an earlier one from CMS) we moved into the presentation and discussion stage, and I’ve been learning a lot. The challenges of the subject are truly daunting, not only because the range of possible semi-visible jets is huge, but also because the scientific expertise that has to be gathered in order to design searches for semi-visible jets is exceptionally wide, and often lies at or beyond the cutting edge of research.

Continue reading

Celebrating the 34th Birthday of the Higgs Boson!

Ten years ago today, the discovery of the type of particle known as the “Higgs Boson” was announced. [What is this particle and why was its discovery important? Here’s the most recent Higgs FAQ, slightly updated, and a literary article aimed at all audiences high-school and up, which has been widely read.]

But the particle was first produced by human beings in 1988 or 1989, as long as 34 years ago! Why did it take physicists until 2012 to discover that it exists? That’s a big question with big implications.

Continue reading

The Size of an Atom: How Scientists First Guessed It’s About Quantum Physics

Atoms are all about a tenth of a billionth of a meter wide (give or take a factor of 2). What determines an atom’s size? This was on the minds of scientists at the turn of the 20th century. The particle called the “electron” had been discovered, but the rest of an atom was a mystery. Today we’ll look at how scientists realized that quantum physics, an idea which was still very new, plays a central role. (They did this using one of their favorite strategies: “dimensional analysis”, which I described in a recent post.)

Since atoms are electrically neutral, the small and negatively charged electrons in an atom had to be accompanied by something with the same amount of positive charge — what we now call “the nucleus”. Among many imagined visions for what atoms might be like was the 1904 model of J.J. Thomson, in which the electrons are embedded within a positively-charged sphere the size of the whole atom.

But Thomson’s former student Ernest Rutherford gradually disproved this model in 1909-1911, through experiments that showed the nucleus is tens of thousands of times smaller (in radius) than an atom, despite having most of the atom’s mass.

Once you know that electrons and atomic nuclei are both tiny, there’s an obvious question: why is an atom so much larger than either one? Here’s the logical problem:

  • Negatively charged particles attract positively charged ones. If the nucleus is smaller than the atom, why don’t the electrons find themselves pulled inward, thus shrinking the atom down to the size of that nucleus?
  • Well, the Sun and planets are tiny compared to the solar system as a whole, and gravity is an attractive force. Why aren’t the planets pulled into the Sun? It’s because they’re moving, in orbit. So perhaps the electrons are in orbit around the nucleus, much as planets orbit a star?
  • This analogy doesn’t work. Unlike planets, electrons orbiting a nucleus would be expected to emit ample electromagnetic waves (i.e. light, both visible and invisible), and thereby lose so much energy that they’d spiral into the nucleus in a fraction of a second.

(These statements about the radiated waves from planets and electrons can be understood with very little work, using — you guessed it — dimensional analysis! Maybe I’ll show you that in the comments if I have time.)

So there’s a fundamental problem here.

  • The tiny nucleus, with most of the atom’s mass, must be sitting in the middle of the atom.
  • If the tiny electrons aren’t moving around, they’ll just fall straight into the nucleus.
  • If they are moving around, they’ll radiate light and quickly spiral into the nucleus.

Either way, this would lead us to expect

  • R_nucleus = # R_atom

where # is not too, too far from 1. (This is the most naive of all dimensional analysis arguments: two radii in the same physical system shouldn’t be that different.) This is in contradiction to experiment, which tells us that # is about 1/100,000! So it seems dimensional analysis has failed.

Or is it we who have failed? Are we missing something, which, once included, will restore our confidence in dimensional analysis?

We are missing quantum physics, and in particular Planck’s constant h. When we include h into our dimensional analysis, a new possible size appears in our equations, and this sets the size of an atom. Details below.
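For the curious, here’s a quick numerical sketch of where that argument leads (my own illustration, using standard CODATA values; the promised details below may take a different route). The only length you can build from ħ, the electron’s mass, and the strength of the electric attraction is ħ²/(m k e²):

```python
# Dimensional-analysis sketch (an illustration, not the post's own text):
# from hbar, the electron mass m_e, and the Coulomb energy scale k*e^2,
# the only combination with dimensions of length is hbar^2 / (m_e * k * e^2).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
k    = 8.9875517923e9    # Coulomb constant, N*m^2/C^2
e    = 1.602176634e-19   # elementary charge, C

a = hbar**2 / (m_e * k * e**2)
print(f"a = {a:.2e} m")  # about 5.3e-11 m -- the observed size of atoms
```

Up to the usual dimensional-analysis “#” (which here happens to be 1, giving what’s known as the Bohr radius), this lands right on the tenth-of-a-billionth-of-a-meter scale quoted at the top of this post.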

Continue reading

Black Holes, Mercury, and Einstein: The Role of Dimensional Analysis

In last week’s posts we looked at basic astronomy and Einstein’s famous E=mc² through the lens of the secret weapon of theoretical physicists, “dimensional analysis”, which imposes a simple consistency check on any known or proposed physics equation.  For instance, E=mc² (with E being some kind of energy, m some kind of mass, and c the cosmic speed limit [also the speed of light]) passes this consistency condition.

But what about E=mc or E=mc⁴ or E=m²c³ ? These equations are obviously impossible! Energy has dimensions of mass * length² / time². If an equation sets energy equal to something, that something has to have the same dimensions as energy. That rules out m²c³, which has dimensions of mass² * length³ / time³. In fact it rules out anything other than E = # mc² (where # represents an ordinary number, which is not necessarily 1). All other relations fail to be consistent.
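The consistency check in the paragraph above is simple enough to mechanize. Here’s a minimal sketch (my own illustration, not from the post) that represents dimensions as (mass, length, time) exponents, so that multiplying quantities means adding exponents:

```python
# Toy dimension checker: a quantity's dimensions are a tuple of
# (mass, length, time) exponents; products add exponents, powers scale them.
def dim_mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def dim_pow(a, n):
    return tuple(x * n for x in a)

MASS   = (1, 0, 0)
SPEED  = (0, 1, -1)   # length / time
ENERGY = (1, 2, -2)   # mass * length^2 / time^2

# E = m c^2 is dimensionally consistent...
assert dim_mul(MASS, dim_pow(SPEED, 2)) == ENERGY
# ...but E = m c and E = m^2 c^3 are not.
assert dim_mul(MASS, SPEED) != ENERGY
assert dim_mul(dim_pow(MASS, 2), dim_pow(SPEED, 3)) != ENERGY
print("only m*c^2 matches the dimensions of energy")
```

Note that the check can never fix the ordinary number #; that always takes real physics.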

That’s why physicists were thinking about equations like E = # mc² even before Einstein was born. 

The same kind of reasoning can teach us (as it did Einstein) about his theory of gravity, “general relativity”, and one of its children, black holes.  But again, Einstein’s era wasn’t first to ask the question.   It goes back to the late 18th century. And why not? It’s just a matter of dimensional analysis.
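To give a flavor of that reasoning (this numerical sketch is my own illustration): the only length you can build from Newton’s constant G, a mass M, and the speed limit c is GM/c², so dimensional analysis alone suggests R = # GM/c² for the size of a black hole. Full general relativity fixes # = 2.

```python
# Dimensional-analysis sketch (an illustration): the only length built
# from G, M, and c is G*M/c^2; general relativity fixes the prefactor at 2.
G     = 6.67430e-11   # Newton's constant, m^3 / (kg * s^2)
c     = 2.99792458e8  # cosmic speed limit, m/s
M_sun = 1.989e30      # mass of the Sun, kg

r_sun = 2 * G * M_sun / c**2   # Schwarzschild radius of one solar mass
print(f"r = {r_sun/1000:.2f} km")  # about 2.95 km
```

Squeeze the Sun inside three kilometers, in other words, and you get a black hole; dimensional analysis gets you that scale without any relativity at all.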

Continue reading

E = m c-Squared: The Simple Dimensions of a Discovery

In my last post I introduced you to dimensional analysis, an essential trick for theoretical physicists, and showed you how you could address and sometimes solve interesting and important problems with it while hardly doing any work. Today we’ll look at it differently, to see its historical role in Einstein’s relativity.

Continue reading

Dimensional Analysis: A Secret Weapon in Physics

It’s not widely appreciated how often physicists can guess the answer to a problem before they even start calculating. By combining a basic consistency requirement with scientific reasoning, they can often derive most of a formula without doing any real work at all. This week I want to introduce this trick to you, and show you some of its power.

The trick, called “dimensional analysis” or “unit analysis” or “dimensional reasoning”, involves requiring consistency among units, sometimes called “dimensions.” For instance, the distance from the Earth to the Sun is, obviously, a length. We can state the length in kilometers, or in miles, or in inches; each is a unit of length. But for today’s purposes, it’s irrelevant which one we use. What’s important is this: the Earth-Sun distance has to be expressed in some unit of length, because, well, it’s a length! Or in physics-speak, it has the “dimensions of length.”

For any equation in physics of the form X = Y, the two sides of the equation have to be consistent with one another. If X has dimensions of length, then Y must also have dimensions of length. If X has dimensions of mass, then Y must also. Just as you can’t meaningfully say “I weigh twelve meters” or “I am seventy kilograms old”, physics equations have to make sense, relating weights to weights, or lengths to lengths, or energies to energies. If you see an equation X=Y where X is in meters and Y is in Joules (a measure of energy), then you know there’s a typo or a conceptual mistake in the equation.

In fact, looking for this type of inconsistency is a powerful tool, used by students and professionals alike, in checking calculations for errors. I use it both in my own research and when trying to figure out, when grading, where a student went wrong.

That’s nice, but why is it useful beyond checking for mistakes?

Sometimes, when you have a problem to solve involving a few physical quantities, there might be only one consistent equation relating them — only one way to set an X equal to a Y. And you can guess that equation without doing any work.

Well, that’s pretty abstract; let’s see how it works in a couple of examples.
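As a taste of the kind of example meant here (the pendulum is my own choice of illustration; the examples in the full post may differ): for a swinging pendulum, the only combination of its length L and the gravitational acceleration g with dimensions of time is sqrt(L/g), so the period must be # * sqrt(L/g) before you’ve done any actual mechanics.

```python
# Dimensional-analysis sketch (an illustration): the period of a pendulum
# must be a pure number times sqrt(L/g), the only time you can build
# from its length L and the gravitational acceleration g.
import math

g = 9.81   # gravitational acceleration, m/s^2
L = 1.0    # pendulum length, m

t_guess = math.sqrt(L / g)        # dimensional analysis: period = # * sqrt(L/g)
t_exact = 2 * math.pi * t_guess   # the full calculation fixes # = 2*pi

print(f"guess ~ {t_guess:.2f} s, small-swing period = {t_exact:.2f} s")
```

The guess is off only by the number #, which turns out to be 2π; the scaling with L and g — the hard part — comes for free.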

Continue reading

A Big Think Made of Straw: Bad Arguments Against Future Colliders

Here’s a tip.  If you read an argument either for or against a successor to the Large Hadron Collider (LHC) in which the words “string theory” or “string theorists” form a central part of the argument, then you can conclude that the author (a) doesn’t understand the science of particle physics, and (b) has an absurd caricature in mind concerning the community of high energy physicists.  String theory and string theorists have nothing to do with whether such a collider should or should not be built.

Such an article has appeared on Big Think. It’s written by a certain Thomas Hartsfield.  My impression, from his writing and from what I can find online, is that most of what he knows about particle physics comes from reading people like Ethan Siegel and Sabine Hossenfelder. I think Dr. Hartsfield would have done better to leave the argument to them. 

An Army Made of Straw

Dr. Hartsfield’s article sets up one straw person after another. 

  • The “100 billion” cost is just the first.  (No one is going to propose, much less build, a machine that costs 100 billion in today’s dollars.)  
  • It refers to “string theorists” as though they form the core of high-energy theoretical physics; you’d think that everyone who does theoretical particle physics is a slavish, mindless believer in the string theory god and its demigod assistant, supersymmetry.  (Many theoretical particle physicists don’t work on either one, and very few ever do string theory. Among those who do some supersymmetry research, it’s often just one in a wide variety of topics that they study. Supersymmetry zealots do exist, but they aren’t as central to the field as some would like you to believe.)
  • It makes loud but tired claims, such as “A giant particle collider cannot truly test supersymmetry, which can evolve to fit nearly anything.”  (Is this supposed to be shocking? It’s obvious to any expert. The same is true of dark matter, the origin of neutrino masses, and a whole host of other topics. It’s not unusual for an idea to come with a parameter which can be made extremely small. Such an idea can be discovered, or made obsolete by other discoveries, but excluding it may take centuries. In fact this is pretty typical; so deal with it!)
  • “$100 billion could fund (quite literally) 100,000 smaller physics experiments.”  (Aside from the fact that this plays sleight-of-hand, mixing future dollars with present dollars, the argument is crude. When the Superconducting Supercollider was cancelled, did the money that was saved flow into thousands of physics experiments, or other scientific experiments?  No.  Congress sent it all over the place.)  
  • And then it concludes with my favorite, a true laugher: “The only good argument for the [machine] might be employment for smart people. And for string theorists.”  (Honestly, employment for string theorists!?!  What bu… rubbish. It might have been a good idea to do some research into how funding actually works in the field, before saying something so patently silly.)

Meanwhile, the article never once mentions the particle physics experimentalists and accelerator physicists.  Remember them?  The ones who actually build and run these machines, and actually discover things?  The ones without whom the whole enterprise is all just math?

Although they mostly don’t appear in the article, there are strong arguments both for and against building such a machine; see below.  Keep in mind, though, that any decision is still years off, and we may have quite a different perspective by the time we get to that point, depending on whether discoveries are made at the LHC or at other experimental facilities.  No one actually needs to be making this decision at the moment, so I’m not sure why Dr. Hartsfield feels it’s so crucial to take an indefensible position now.

Continue reading

Long Live LLPs!

Particle physics news today...

I’ve been spending my mornings this week at the 11th Long-Lived Particle Workshop, a Zoom-based gathering of experts on the subject.  A “long-lived particle” (LLP), in this context, is either

  • a detectable particle that might exist forever, or
  • a particle that, after traveling a macroscopic, measurable distance — something between 0.1 millimeters and 100 meters — decays to detectable particles.

Many Standard Model particles are in these classes (e.g. electrons and protons in the first category, charged pions and bottom quarks in the second).

Typical distances traveled by some of the elementary particles and some of the hadrons in the Standard Model; any above 10⁻⁴ on the vertical axis count as long-lived particles. Credit: Prof. Brian Shuve

But the focus of the workshop, naturally, is on looking for new ones… especially ones that can be created at current and future particle accelerators like the Large Hadron Collider (LHC).

Back in the late 1990s, when many theorists were thinking about these issues carefully, the designs of the LHC’s detectors — specifically ATLAS, CMS and LHCb — were already mostly set. These detectors can certainly observe LLPs, but many design choices in both hardware and software initially made searching for signs of LLPs very challenging. In particular, the trigger systems and the techniques used to interpret and store the data were significant obstructions, and those of us interested in the subject had to constantly deal with awkward work-arounds. (Here’s an example of one of the challenges... an older article, so it leaves out many recent developments, but the ideas are still relevant.)

Additionally, this type of physics was widely seen as exotic and unmotivated at the beginning of the LHC run, so only a small handful of specialists focused on these phenomena in the first few years (2010-2014ish).  As a result, searches for LLPs were woefully limited at first, and the possibility of missing a new phenomenon remained high.

More recently, though, this has changed. Perhaps this is because of an increased appreciation that LLPs are a common prediction in theories of dark matter (as well as other contexts).  The number of new searches, new techniques, and entirely new proposed experiments has ballooned, as has the number of people participating. Many of the LLP-related problems with the LHC detectors have been solved or mitigated. This makes this year’s workshop, in my opinion, the most exciting one so far.  All sorts of possibilities that aficionados could only dream of fifteen years ago are becoming a reality. I’ll try to find time to explore just a few of them in future posts.

But before we get to that, there’s an interesting excess in one of the latest measurements… more on that next time.

Just a few of the unusual signatures that can arise from long-lived particles. (Credit: Prof. Heather Russell)