The Nagoya (Japan) conference in celebration of the Inauguration of the Kobayashi-Maskawa Institute came to a close this morning. There was a pleasant little ceremony yesterday in which Kobayashi and Maskawa took up shovels to place dirt around a newly planted tiny apple tree outside the institute — an apple tree descended directly from “Newton’s apple tree” at Trinity College, Cambridge (you know the one, the tree whose apple is said to have inspired Newton’s theory of gravity — as though he’d never seen a dropped fork before).
Meanwhile, back indoors there were numerous talks on a wide variety of research topics. Several of these addressed Japan’s broad experimental particle physics program, which covers neutrinos, bottom quarks, dark matter, cosmic rays, and the development of new experimental devices. Here are a few tidbits I heard about yesterday.
First, the one you all want to know: there was some very good news from the OPERA experiment (the one with the speedy neutrinos), in which Nagoya is a participant. A key problem with the experimental method used in that measurement (one that I and many others expressed concerns about immediately) is that the pulses of neutrinos that were sent from CERN to OPERA were 10,000 nanoseconds long, while the effect observed by OPERA involved a shift of only 60 nanoseconds; the measurement therefore required precise knowledge of the neutrino pulse shape, but this had to be inferred from the shape of the pulse of protons that leads to the pulse of neutrinos. (Recall how you make a neutrino beam.) There have been widespread concerns that a very small error in that inference could potentially cause a fake shift. So the obvious thing to do instead is to have CERN send a series of short pulses — a couple of nanoseconds long, with big gaps between them. It’s like sending a series of loud and isolated clicks instead of a long blast on a horn; in the latter case you have to figure out exactly when the horn starts and stops, but in the former you just hear each click and then it’s already over. In other words, with the short pulses you don’t need to know the pulse shape, just the pulse time. And you also don’t need to measure thousands of neutrinos in order to reproduce the pulse shape, getting the leading and trailing edges just right; you just need a small number — maybe even as few as 10 or so — to check the timing of just those few pulses for which a neutrino makes a splash in OPERA (recall how you detect neutrinos). OPERA didn’t want to do this because it comes at the cost of a large reduction in the sheer number of neutrinos, and this affects OPERA’s main research program (which involves neutrino oscillations). But apparently the concerns raised by the community have been strong enough to prompt OPERA to request that the CERN neutrino beam operators (remember OPERA is not part of CERN, despite press reports to the contrary) send them short pulses. This process has already begun, as of last week, and according to the speaker, Nagoya’s own Professor Mitsuhiro Nakamura, it will be a matter of only a few weeks before OPERA will have enough neutrinos to make this important cross-check. So this is very good news.
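To see why the short pulses make such a difference, here is a toy numerical sketch (my own illustration, not OPERA’s actual analysis; the 10,000 ns pulse length is taken from the text above, while the ~3 ns bunch width and the event counts are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_shift = 60.0  # ns, the size of the anomaly OPERA reported

# Long-pulse case: arrivals spread across a ~10,000 ns pulse. Extracting
# a 60 ns shift means statistically comparing thousands of arrival times
# against a model of the pulse shape; the toy "shape" here is uniform,
# and the shift is estimated from the sample mean. Any mis-modelling of
# the true shape would bias the result directly.
n_long = 16000
long_times = rng.uniform(0.0, 10000.0, n_long) + true_shift
est_long = long_times.mean() - 10000.0 / 2
# statistical scatter alone is ~10000/sqrt(12)/sqrt(16000), about 23 ns

# Short-pulse case (assumed ~3 ns bunches, well separated): each event's
# arrival time IS the shift, up to the bunch width. No shape model is
# needed, and a handful of events suffices.
n_short = 10
short_times = rng.uniform(0.0, 3.0, n_short) + true_shift
est_short = short_times.mean() - 3.0 / 2

print(f"long pulses : shift = {est_long:5.1f} ns from {n_long} events")
print(f"short pulses: shift = {est_short:5.1f} ns from {n_short} events")
```

The point is not the statistical scatter but the model dependence: with short, well-separated pulses there is simply no pulse shape to get wrong.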
Meanwhile, I personally am still quite confused about what the MINOS experiment [which measures similar neutrinos, at only slightly lower energies, traveling from Fermilab (near Chicago) to a mine in Minnesota — just about the same distance as from CERN to OPERA] can and can’t do to check the OPERA measurement. I have not run into a MINOS expert and have heard conflicting information. So please set me straight if what I now say is wrong. What I was told yesterday is that MINOS does not need to take any new data in order to check the OPERA measurement; the data is fine. All that is needed is to calibrate the clocks, which can be done in a relatively short time: months, not years. So that too would be good news… though I did not hear what the expected level of precision would be. [UPDATE — see a statement and a link from a more knowledgeable person in the comments below.]
There were many other talks, and here are just a few I enjoyed hearing about.
- There was a very nice talk that included a description of the Super-KEK B-factory (a machine for making bottom quark/antiquark pairs in a controlled environment, at a rate 100 times higher than its predecessor machines). [Why make so many bottom quarks? The decays of hadrons that contain bottom quarks are a well-known opportunity for high-precision tests of the equations that describe the known particles and forces, tests that are often complementary to what can be done at a very high-energy machine such as the Large Hadron Collider.]
- Nagoya University, which has a long and distinguished history in experimental high-energy physics, is involved in the development of innovative, very high-precision devices for detecting particle tracks. These have several potentially important applications in particle experiments. A previous generation of these devices was used in the OPERA experiment.
- There was a presentation of this summer’s result from the T2K neutrino oscillation experiment (which sends neutrinos from one Japanese laboratory to another — Tokai to Kamioka [hence “T2K”]). This very important result, which still has rather low statistical significance and therefore is potentially subject to considerable change, suggests that the rate of oscillation of muon neutrinos into electron neutrinos might be just below what was excluded by previous experiments. If this is true, it will not only be important in and of itself, it will also mean that other interesting neutrino-oscillation measurements will be easier than feared. (By the way — because they use rather low-energy neutrinos, and because of certain timing uncertainties at Kamiokande, it appears they are unlikely to be competitive in checking the OPERA result on neutrino speeds.)
- A very nice talk, mainly focused on the Fermi/LAT satellite’s results, covered many interesting topics. One that caught my eye was a very interesting limit (i.e. no signal was observed) on collisions of dark-matter particles (in which they would be converted to known particles, which are then observable). Specifically, none were seen occurring in dwarf galaxies near the Milky Way, a good place to look because backgrounds from astrophysical sources are small in dwarf galaxies. And another result that was quite striking put a very powerful limit on the possibility that high-energy photons travel at a different speed from lower-energy photons — confirming to a new level of precision that the speed of light does not vary with the energy of the photons that make up the light. (A back-of-envelope illustration of why such limits are so powerful appears just after this list.)
- There were interesting presentations on “lattice gauge theory” (computer simulations of the physical behavior of forces like the strong nuclear force, involving particles such as quarks, antiquarks and gluons) as applied to hypothetical worlds in which the number of types of lightweight quarks (lighter than the proton) is larger than we have in nature. Such studies might be relevant for understanding the Higgs field itself. (The buzzword here is “technicolor”, a speculation for the origin of the Higgs field.) Personally I find this very interesting, as I’ve been part of a community of theorists who for well over a decade have been urging lattice gauge theory experts to do these studies. Computer power seems to be reaching the point where useful results on the tougher cases are possible.
- One of the world’s experts on technicolor (Professor Elizabeth Simmons) talked about a [relatively!] simple version of technicolor, called topcolor-assisted-technicolor, and presented evidence that current LHC data (from the search for the Higgs) already essentially excludes this possibility. This means a more complex version of this class of models (such as top-seesaw-assisted technicolor) is needed.
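Here is the back-of-envelope illustration promised above, for the Fermi/LAT photon-speed result. The numbers are hypothetical but representative (this is not the actual Fermi/LAT analysis): photons from a gamma-ray burst travel for billions of years, so even a minuscule fractional speed difference would open up an easily measured arrival-time gap.

```python
# Toy estimate of how tightly the near-simultaneous arrival of photons from
# a distant gamma-ray burst constrains any energy-dependence of the speed
# of light. Hypothetical, representative numbers; NOT the Fermi/LAT analysis.

SECONDS_PER_YEAR = 3.15e7
travel_time = 7e9 * SECONDS_PER_YEAR  # a burst ~7 billion light-years away
arrival_spread = 1.0                  # high/low-energy photons arrive within ~1 s

# A fractional speed difference f would produce an arrival gap ~ f * travel_time,
# so a gap smaller than `arrival_spread` implies:
f_limit = arrival_spread / travel_time
print(f"fractional speed difference < {f_limit:.1e}")  # about 5e-18
```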
82 Responses
I very simply argue that any distance estimate used to calculate the speed of neutrinos passing through the Earth, for example, cannot be confirmed. As a result, neutrinos’ paths through the Earth’s locally variable gravitational fields are not determinable.
Since the path and distance actually traversed by neutrinos are not precisely determinable, no conclusive result can be obtained.
In my preceding comment I also refer to Wolfgang Kundt’s argument that the OPERA experiment “is the first experiment to test Einstein’s theory for the (weak) gravity field of Earth, with the result that the neutrinos propagated (just) luminally.”
In combination, Kundt argues that the 61 ns discrepancy is an effect of general relativity for precisely timed measurements of gravitational effects, while I simply dismiss any such attempts to precisely determine the distance traversed by neutrinos over such relatively small distances through complex gravitational fields as invalid.
IMO, if we were to set up a neutrino detector facility on the moon (accounting for a much larger dispersion angle), perhaps we could obtain reliably conclusive timing results for detected neutrinos, since the precise distance to the moon can be experimentally confirmed through laser reflection. Repeatable results should allow calibration of any gravitational effects.
Maybe I should elaborate. Perhaps the discrepancy is the result of a directional motion of the Earth that we have to date been unaware of. Assuming the experiment has allowed for the Earth’s rotational speed, orbital speed, solar-system rotation and galactic speed, then by performing the experiment repeatedly with all these factors accounted for it would be possible to calculate this new speed and direction of the Earth’s movement.
Assume the speed of light is constant. If the time taken is accurate, then the distance travelled must be incorrect. Is the discrepancy observed actually the speed of our solar system, or our galaxy, through space?
Thanks for responses.
Last Q:
Is the distance calculated at the moment the experiment is conducted? Given extreme diurnal temperature fluctuations, could the distance change with “earth expansion/contraction”?
The experimenters measure distance continuously and easily detected the shift of more than 10 centimeters from the Aquila earthquake. It’s in their paper; here’s a copy from another independent website: http://pics.livejournal.com/msevior/pic/00025qhy/s640x480
Not to be argumentative, but the GPS measurements and standard geodesy routines were used to determine the distance between two locations, NOT the path taken by the neutrinos in flight! The actual distance traversed by any neutrino is indeterminable!
Unlike laser beams following an optical fiber, neutrinos are free to follow the shortest path between two points, but they do interact gravitationally. As I understand it, gravitational influences can vary at small scales within the Earth, depending on the path taken. Unless the geodesy routines used determine distance in the same manner that neutrinos select their flight path, the incorrect distance is being used to evaluate speed.
As a layperson I cannot assess the potentially variable uncontrolled factors dynamically affecting the paths taken by neutrinos, but some may be addressed in the report: Wolfgang Kundt (2011), “Speed of the CERN Neutrinos released on 22.9.2011 – Was stated superluminality due to neglecting General Relativity?”, http://arxiv.org/abs/1111.3888v1
Forgive the naiveté of going back to basics, but if
Speed = Distance/Time
the question is whether as much “subatomic” scrutiny is being given to calculating D as is being given to clocking T.
Lots of attention has been given to both. While it is likely there’s a mistake somewhere, it’s not likely to be obvious.
Matt
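For concreteness, here is the simple arithmetic relating D, T, and the reported shift (a sketch using the ~730 km CERN-to-Gran-Sasso baseline and the 60 ns early arrival quoted in the post):

```python
c = 2.998e8    # speed of light in vacuum, m/s
D = 730e3      # CERN to Gran Sasso baseline, ~730 km
shift = 60e-9  # reported early arrival, 60 ns

T = D / c                   # light-speed travel time
fraction = shift / T        # fractional speed excess
distance_error = shift * c  # baseline error that would mimic the shift

print(f"travel time        = {T * 1e3:.2f} ms")       # about 2.44 ms
print(f"fractional excess  = {fraction:.1e}")         # about 2.5e-5
print(f"equivalent D error = {distance_error:.0f} m") # about 18 m
```

So the 60 ns shift corresponds to roughly an 18 m error in D, or 2.5 parts in 100,000 in speed; as the earlier comment about the Aquila earthquake indicates, the geodesy is monitored at the 10 cm level, far below the 18 m that would be needed.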
Taking an Occam’s Razor line …
We know that light can slow down as it passes through different media, and we know that the distance measured between the A and B points in the experiment uses lasers, at least in part.
Is it possible that in fact the reverse has actually occurred … that “light slowed down rather than neutrinos sped up” … that something in the facility is causing the distance measured to be “60 ns greater” than it actually is, and consequently the particles are indeed traveling at the speed of light, just over a miscalculated, apparently greater distance?
AK.
The distances were measured using the global positioning satellite system, using electromagnetic waves passing mainly through the vacuum of space; if the distances were wrong because of the speed of light being slowed down somehow, we would certainly have detected that, in other ways, long ago.
Hi, another thought (too much coffee!!). Is there a wave/particle duality issue here? The neutrinos are apparently arriving 60 ns too soon. Assuming the speed of light, this is equivalent to approx. 18 metres too soon. What if this was the wavelength (or a factor of it) of the neutrinos? If the muon detectors at CERN trigger when the neutrino leaves (tail of the wave) and the OPERA detectors trigger on the leading edge of the wave at arrival, this could create an error. I tried the maths and it appears to work for low-energy neutrinos (~10 to the minus 27 eV) and depends on the detector functionality. I can’t make the maths work for the high-energy (17 GeV) neutrinos fired from CERN though . . .
Yes, I’m afraid this line of thinking will not get you anywhere…
However, regarding the 18 metre/61 ns discrepancy: no matter how accurately the distance between the locations was determined by geodesy, neutrino propagation is subject to many potential influences, including relativistic and dynamic gravitational variations involving the Earth’s geometry, the moon and the Sun.
Critically, the actual path taken by propagating neutrinos and especially the distance that they traversed cannot be definitively and precisely determined.
How can the traversal time over an indeterminable distance (estimated to be 730 km) be so precisely determined to be 61 ns less than light would take in a vacuum over the presumably identical distance?
While the actual traversal path and distance of the neutrinos cannot be determined by any experiment subject to so many potential uncontrolled influences, similar experiments at other geographically dispersed facilities may at least produce unexplained variations in their supposed discrepancy with the speed of light – hopefully drawing attention to the fundamental issue of the indeterminable traversal distance.
I hope this explains my concerns, although I’m merely an inelegant layman…
If the experimental results from OPERA are correct, then, since we know from astronomical observations that the speed of neutrinos in empty space is the same as the speed of light, it must imply that the speed of neutrinos through rock is marginally faster than their speed through empty space. A surprising result, but not a contradiction of Special and General Relativity. It would be a great follow-up test to try firing the neutrinos through the centre of the Earth.
Richard
On the contrary, Special and General Relativity DO imply that any effect of the rock can only slow the neutrinos down. You can prove this. So even if the rock has a role to play, one must change Einstein’s equations somehow.
It strikes me that even if we assume travel at greater than the speed of light in a vacuum is impossible, it might be possible in solid rock. If the neutrinos are passing their momentum to other neutrinos on the journey, they could plausibly travel at any speed, much like hitting a 732 km-long piece of rock with a very large hammer; the movement observed at one end in comparison to the other would appear faster than the speed of light.
The general idea that the rock might be part of the story isn’t a crazy one; most of our measurements of speeds are done in vacuum (or air, at worst). Your specific idea doesn’t make sense, I’m afraid: in fact, your premise isn’t correct, for an interesting reason. If, in fact, you hit a 732 km piece of rock with a hammer, a shock wave will travel down the rock at the speed of sound in rock, and the other end won’t move until the shock wave arrives. (That’s how earthquakes work, right?! The earth cracks at one point, but it is seconds or minutes before the shaking arrives at any given place.) Since the shock waves travel below the speed of light, the effect of the hammer-blow travels slower than light. And the same would be true for any imaginable effect of neutrinos on matter, or vice versa: within Einstein’s theory of relativity, you can prove [looking at the mathematics of the equations] that any such effect would be at or slower than the ultimate speed limit. So if OPERA is right, we really do have to modify Einstein’s equations somewhere, even if the effect is somehow due to the rock.
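To put rough numbers on the hammer analogy (a quick sketch; the ~6 km/s figure is a typical seismic compression-wave speed in rock, an assumed value):

```python
L = 732e3      # length of the rock, m
v_sound = 6e3  # typical P-wave speed in rock, m/s (assumed)
c = 3e8        # speed of light in vacuum, m/s

print(f"hammer signal arrives after ~{L / v_sound:.0f} s")  # about two minutes
print(f"light would take            ~{L / c * 1e3:.1f} ms") # about 2.4 ms
```

The far end of the rock doesn’t budge for about two minutes; nothing about the hammer blow outruns light.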
Hi Matt, the second document I referred to suggests that gravity increases initially with depth (extract: “The value of g rises to a maximum of 999 gal at a depth of about 6 to 700km”) before then falling off . . . . The beam does not go below that depth in its journey.
I stand corrected on that point, then. An effect of the earth’s density profile no doubt.
Could the apparent speed of the neutrinos between CERN and Gran Sasso be affected by local gravity effects on the time observed by the neutrinos?
Here is my simple hypothesis (which may be way off of course!)
Gravity slows time (http://physicsworld.com/cws/article/news/41740), and gravity at Gran Sasso (1,400 m below the surface) may be higher than at the surface (http://onlinelibrary.wiley.com/doi/10.1111/j.2153-3490.1952.tb00998.x/pdf). CERN is approx. 55 m below the surface, making an average depth of, say, 727 metres for the neutrino beam.
Increase gravity, slow time and the neutrinos can go further while still travelling at almost the speed of light . . . .
Is this a possibility?
Chris Tolmie
[Actually, since the earth is curved, the neutrino beam is much deeper during much of its travels. And gravity far inside the earth is actually weaker than at the surface, so I’m not sure that part of your hypothesis would hold.]
In Einstein’s theory of gravity, there are certainly effects on time due to gravitational fields. And the GPS system, in fact, has to correct for it — it is a big enough effect that the GPS system would start giving you wrong directions very quickly if you didn’t account for it. But by the same token, the effect would be too small to give any effect on the OPERA neutrinos. Conversely, if you said Einstein is wrong and the effect is larger than he predicted, then you’d expect you’d already have seen signs of that in the GPS system.
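For scale, here is a rough estimate of the gravitational time-dilation effect over the beam’s path (a sketch using the weak-field formula Δt/t ≈ gh/c²; the depth comes from the geometry of a straight 730 km chord through the curved Earth):

```python
g = 9.8      # m/s^2, surface gravity (roughly constant at these depths)
c = 3.0e8    # m/s
L = 730e3    # CERN to Gran Sasso baseline, m
R = 6.371e6  # Earth radius, m

depth = L**2 / (8 * R)       # maximum depth of the straight chord: ~10 km
dilation = g * depth / c**2  # weak-field fractional time shift

needed = 60e-9 / (L / c)     # fractional effect OPERA would require

print(f"max chord depth  = {depth / 1e3:.1f} km")
print(f"time dilation    = {dilation:.1e}")  # ~1e-12
print(f"needed for 60 ns = {needed:.1e}")    # ~2.5e-5
```

The gravitational effect comes out some seven orders of magnitude too small, which is the quantitative version of the point about GPS above.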
Hi Matt,
I would just like to bring to your attention the following paper: http://arxiv.org/abs/1110.3783 . The author questions the statistical procedure used to compute the global emission PDF. He claims the order of the two operations, summing and normalizing the individual waveforms, used by the OPERA team is wrong. I did not catch all the details but the idea sounds interesting. What do you think?
Just noting that the updated paper (yesterday) with 20 individual neutrino events collected in recent weeks renders most arguments about the statistical process and proton extraction waveform moot.
Absolutely. That’s exactly why this cross-check is so important. See my new post http://profmattstrassler.com/2011/11/18/operas-next-act/ and detailed article http://profmattstrassler.com/articles-and-posts/neutrinos-faster-than-light/opera-comparing-the-two-versions/ .
Nice explanation, Matt, which has been picked up in the press. The few-ns spikes will mean that every neutrino will count (not just mainly the ones at the ends, as per the Palmer analysis).
I’m not sure, however, that MINOS replicates the OPERA set-up, as I’ve commented at http://t.co/lOzF0IYF . As far as I understand, if there’s funny business across the hadron stop, MINOS (up to now) won’t have picked it up, because they rely on near detector / far detector deltas.
John — thanks! Could I ask you to explain that last bit about the hadron stop a little more slowly, for my non-expert readers (and hey, for me too…!) I would have thought that relying only on the comparison of the near detector timing with the far detector timing would be an *advantage*… that this would make MINOS less sensitive to OPERA’s potential difficulties.
Can you elucidate further on your Oct. 26th blog:
“And another result that was quite striking put a very powerful limit on the possibility that high-energy photons travel at a different speed from lower-energy photons — confirming to a new level of precision that the speed of light does not vary with the energy of the photons that make up the light.”
?
This seems to be the latest result: http://arxiv.org/abs/0908.1832 ; I thought, from what the speaker said, that they had an update, but maybe not. I’ll try to find time to explain this in more detail.
Technicolor in terms of the Rishon Model was internally inconsistent and was theoretically ruled out 30 years ago. In my view, technicolor is simply wrong.
Hi Prof. Matt,
Have you had a chance to check out this analysis http://arxiv.org/PS_cache/arxiv/pdf/1110/1110.5275v1.pdf by H. Bergeron which says that the statistical analysis by the OPERA team was wrong?
If the above is correct, then this new experiment should yield a null result.
Opt,
The author of this paper has misunderstood the fitting method employed by OPERA which is described in this PhD thesis:
http://operaweb.lngs.infn.it:2080/Opera/ptb/theses/theses/Brunetti-Giulia_phdthesis.pdf
The maximum likelihood fit is done without binning and the interior structure of the proton pulses affects the fit, not just the edges, in contradiction to the assumptions made by the paper you referenced. The paper relies exclusively on information gleaned from the original OPERA preprint and makes incorrect assumptions about the fitting methodology.
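For readers unfamiliar with the distinction, here is a minimal sketch of an unbinned maximum-likelihood fit for a time offset, with a hypothetical rippled pulse shape standing in for the measured proton waveform (an illustration of the general technique only; not OPERA’s actual code, waveform, or data):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
T = 10000.0  # ns, pulse length

# Hypothetical pulse-shape PDF with internal structure (ripples), normalized on [0, T].
def shape(t):
    inside = (t >= 0) & (t <= T)
    return np.where(inside, (1 + 0.9 * np.sin(2 * np.pi * t / 1000.0)) / T, 0.0)

# Draw toy neutrino arrival times: pulse shape plus a 60 ns offset
# (simple rejection sampling; 1.9/T bounds the PDF from above).
def sample(n, offset):
    out = []
    while len(out) < n:
        t = rng.uniform(0.0, T, 4 * n)
        keep = rng.uniform(0.0, 1.9 / T, 4 * n) < shape(t)
        out.extend(t[keep])
    return np.array(out[:n]) + offset

events = sample(5000, offset=60.0)

# Unbinned negative log-likelihood: every event enters individually, so the
# ripples in the interior of the pulse constrain the offset, not just its edges.
def nll(offset):
    p = shape(events - offset)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

fit = minimize_scalar(nll, bounds=(0.0, 120.0), method="bounded")
print(f"fitted offset: {fit.x:.1f} ns")  # should land near 60
```

With 5,000 toy events the fit recovers the offset to a few nanoseconds, driven by the interior structure of the waveform rather than its edges alone.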
Matt,
I have a simple question: how and when was the speed of light measured to the precision required for the OPERA experiment?
Also, is it re-measured on a regular basis to improve precision?
Thanks.
Tony
I don’t know the precise answer; I do not have a reliable historical reference handy. It’s not hard to do it; Michelson reached the part per 100,000 level back in the 1920s, before we had electronics and lasers and atomic clocks.
Hi,
Just found your blog through the BBC article that cites it.
Here is a link to my talk with official statements as to what MINOS and T2K can do to verify the experiment:
https://indico.fnal.gov/getFile.py/access?resId=0&materialId=slides&contribId=43&sessionId=9&subContId=0&confId=4887
I personally gave the talk at the Advance Neutrino Technologies Workshop a few weeks back. I am happy to answer any questions on MINOS. You will also find a few slides with information provided by T2K, with their very official statement.
More informally, as far as we can tell right now MINOS can reduce the systematic error on our previous measurement by a factor of 2 (maybe 3) with existing data. MINOS has an interesting beam structure that will help significantly, plus it makes a measurement which is neutrino to neutrino as opposed to relying on the proton waveforms. Work is also in progress to take all new data with better timing which will help us verify systematics in old data. Finally, the timing of the whole experiment will be updated for future running in the MINOS+ era.
As I said, I’m happy to answer any questions,
Mayly
Thanks for your message! Will take a look, and we look forward to your updates.
Hi Matt,
Thanks for the extremely informative blog entry. I was wondering if you could attach names (and/or arXiv links) to all of the talks that you saw?
Thanks!
Chanda
The slides from the talks do not appear to be posted on the web, unfortunately. I am not sure there is any plan to do so: I spoke at the Sakata Conference yesterday and no one asked me about making my talk public. In the case of the experimental results, you are most likely to find the results by going to the experiment’s website (which you can always find easily using search engines). In the case of theoretical results, you are most likely to find them by going to http://inspirehep.net/ , a powerful search engine; you can type in the author’s name (syntax: “a strassler, m” where “a” is for author) and you’ll get all the papers in reverse chronological order. If you fail and need help with a specific one, I can try to assist over the weekend.
As far as I understood, MINOS will pursue both approaches, i.e. re-analysing existing data using improved methods (whatever that means), and taking new data with a setup optimized for this measurement.
The timescale for the re-analysis seems to be several months; the new data of course will take longer, 1-2 years.
Sorry, I forgot the details from the presentation; in particular I don’t remember how much improvement they expect for each analysis.
Hi Matt,
Great post (as usual)! Sorry to nitpick but I believe that T2K actually stands for “Tokai-to-Kamioka” (http://arxiv.org/abs/1106.1238).
Thanks for catching that blunder…