Matt Strassler 11/19/11
Ok, here’s the latest, as I currently understand it, on the OPERA experiment’s measurement that suggests (if it is correct in all respects) that neutrinos might be traveling faster than the speed of light, which in the standard version of Einstein’s theory of special relativity should be the ultimate speed limit that no particles can exceed.
Warning: For the moment not all numbers are double-checked, and there might, in places, be a number that’s off by as much as a factor of 10. But there should be no major errors. Also, I’m going to be restructuring the website a little bit and will add more cross-links between this article and the various OPERA articles and posts that I’ve put up. Apologies if there’s a bit of construction going on while you’re here.
The Old and New OPERA measurements
Since I’m going to be talking about OPERA’s first measurement (the one they announced in September, which started the public hubbub) and also about this more recent OPERA measurement that serves as a cross-check and got all the press this weekend, I’m going to give them the names OPERA-1 and OPERA-2 so we don’t get confused about which one I’m referring to.
The only difference between OPERA-1 and OPERA-2 (as far as I am aware) is an alteration in the duration and scheduling of the pulses of neutrinos that are sent from their source at CERN (near Geneva) toward the Gran Sasso laboratory (in Italy) where OPERA is located. I’ll discuss the pulses in much more detail below. You should keep in mind that the distance measurements and the time calibrations for OPERA-1 and OPERA-2 are the same, so if there’s a mistake there, it will be present in both versions of the experiment.
Neutrinos Are Tough to Catch
Now, the whole story about OPERA starts with a basic fact about neutrinos. A high-energy electron or proton or neutron won’t get very far in ordinary matter; it will smash into some of the atoms in the material, and soon come to a stop. But neutrinos, which are not affected by the electromagnetic force or by the strong nuclear force, are very unlikely to hit anything when they pass through ordinary matter. Although the probability of hitting something increases for neutrinos that have higher energy, and the CERN beam has neutrinos that are much higher in energy than, say, the neutrinos from the sun, it is still true that the vast, vast majority of the neutrinos in CERN’s beam pass through the 730 kilometers of rock between CERN and the Gran Sasso laboratory, and out into space, without hitting a single atom. So rare are the interactions (and so broad is the beam by the time it gets to Gran Sasso) that only 1 in about 1,000,000,000,000,000 neutrinos (one in a thousand million million, or a quadrillion in American counting) from CERN’s beam has a collision inside the OPERA experiment! Fortunately it is possible to make large numbers of neutrinos. If it weren’t, OPERA wouldn’t ever detect any neutrinos at all.
In OPERA-2, which ran from October 22nd to November 6th, the number of neutrinos sent from CERN toward Gran Sasso was about 40,000,000,000,000,000. This required over 100,000 pulses of about 300,000,000,000 neutrinos each. Over this 16-day period, OPERA detected about 35 neutrinos, of which 20 were detected well enough to measure them in detail. What you learn from this is important: for most pulses of neutrinos sent from CERN to OPERA, not a single one of those neutrinos hit anything in (or near) OPERA at all.
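As a quick sanity check (a sketch of my own, using only the round figures quoted above), the counting hangs together:

```python
# Back-of-the-envelope check of the OPERA-2 counting, using the round
# numbers quoted in the text; all values are approximate.
pulses = 100_000               # pulses sent during the 16-day run
neutrinos_per_pulse = 3e11     # about 300,000,000,000 neutrinos per pulse
total_sent = pulses * neutrinos_per_pulse
print(f"total neutrinos sent: {total_sent:.1e}")  # a few times 10^16

detection_odds = 1e-15         # roughly 1 in a quadrillion interacts in OPERA
expected_detections = total_sent * detection_odds
print(f"expected detections: {expected_detections:.0f}")  # a few dozen
```

The product comes out a bit below the quoted 40,000,000,000,000,000 because both inputs are round numbers, but the expected few dozen detections is consistent with the ~35 that OPERA saw.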
Now we’re ready to understand why OPERA-1 was such a problematic experiment in which to measure neutrino speeds, and why OPERA-2 was such an improvement.
OPERA-1: A Convoluted Way To Measure Neutrino Speed
First, let’s recall that OPERA-1 was not intended primarily to measure the speed of neutrinos. It was intended to study neutrino oscillations, looking for a process in which the muon neutrinos in the CERN neutrino beam oscillate into tau neutrinos, hit something inside of OPERA, and turn into a tau particle, which only tau-type neutrinos can do. For this reason, the highest priority was to have as powerful a neutrino beam as possible, to make the detection of this process possible. The fact that such a powerful beam would have properties that would make a measurement of the neutrinos’ speed more complicated was not a primary concern. The speed measurement was “parasitic” — something that could be done, admittedly with some difficulty, as a side project that would not affect the main goals of the OPERA experiment. That’s why, as we’ll see, the methods used in OPERA-1 look a tad inelegant.
You may have read in the press that OPERA-1 measured the speed of over 15000 neutrinos. That sounds impressive, but this statement is fundamentally wrong. The OPERA-1 measurement is a single measurement; the speed of neutrinos is only measured once. The method used is very convoluted, to the point that it requires many thousands of neutrinos in order for it to work. And that’s what I’m going to explain now. Bear with me. I promise the explanation of OPERA-2 will be much, much simpler by comparison!
Indeed, the need for 15000 neutrinos sounds a little odd. If you’re just trying to figure out if neutrinos travel faster than light does, wouldn’t even one speeding neutrino be enough?! I mean, if I’m an alien trying to find out whether human airplanes can travel faster than the speed of sound, wouldn’t I just need to see one example of a supersonic jet in action, and I’d know the answer was “yes”?
Well, that’s right: a single example of a super-fast neutrino (or perhaps a handful, just to make sure you didn’t make a mistake) ought to be enough. And in fact that’s how OPERA-2 works. But not OPERA-1. OPERA-1 uses a much more complicated method.
In my blog post on OPERA-2 from Nagoya, Japan, I described OPERA-1 as being done with neutrino pulses that were like a long blast on a horn, while OPERA-2 uses pulses like short clicks. The analogy has now been widely quoted in the press, but there is something important missing from the analogy, and that’s what I want to fill in now.
OPERA-1 claimed the neutrinos from CERN arrived about 60 nanoseconds earlier than expected — 60 nanoseconds before light would have been expected to arrive, assuming all measurements of the times and distances were right. (Nanosecond = 1 billionth of a second = 0.000000001 seconds.) But the tricky part is that in OPERA-1, each pulse of neutrinos sent from CERN was 10,000 nanoseconds long. That still doesn’t sound so bad — if you were to blow your car horn for a minute starting at exactly noon, and I was a kilometer (0.6 miles) away, I could still figure out how fast sound travels by noticing that I first heard the horn blast at 12:00:03. But with neutrinos it doesn’t work that way. OPERA doesn’t detect the whole neutrino pulse. In fact, it’s a lucky pulse that leaves any trace at all! Most of the time, when CERN sent a pulse during OPERA-1, its arrival at OPERA, 2.4 milliseconds later, was greeted with dead silence. Only very occasionally — only about 15,000 times over three years, which is a few times per hour when the experiment is running — a single neutrino from that giant pulse hit something in OPERA (Figure 1), allowing OPERA to detect it.
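To get a feel for the numbers involved, here is a quick sketch of my own (the 730 km baseline and 60 ns shift are the figures quoted above):

```python
# How big is a 60-nanosecond shift compared to the full trip?
c = 299_792_458.0   # speed of light, in meters per second
distance = 730e3    # approximate CERN-to-Gran-Sasso baseline, in meters

light_time = distance / c
print(f"light-speed travel time: {light_time * 1e3:.3f} ms")  # about 2.4 ms

early = 60e-9       # the claimed early arrival, in seconds
fractional_excess = early / light_time
print(f"fractional speed excess: {fractional_excess:.1e}")  # ~2.5e-05
```

So the claimed effect is tiny in relative terms: roughly 2 parts in 100,000 of the travel time.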
What can we say about the speed of this neutrino? Answer: not so much. We know it was produced somewhere within that long pulse, but we don’t have any way of knowing if this particular detected neutrino came from the start of the pulse that contained it, or the end of that pulse, or somewhere in the middle. In other words, we know when the pulse started to leave CERN, but we don’t know exactly when the detected neutrino left CERN! So we can’t measure its speed with much accuracy at all! And certainly there’s no way to tell that it arrived 60 nanoseconds early.
So how does the OPERA-1 measurement work, then? Roughly (and inaccurately) speaking — the actual methods used were more sophisticated than this, but that’s a level of complexity and confusion that we can safely skip since OPERA-2 makes them unnecessary — what was done is something like the following. Combine all the data together: imagine taking all the neutrino pulses from CERN and piling them (figuratively) on top of each other, lining up their start and end times. Then take all the neutrinos observed at OPERA-1 and figuratively pile them on top of each other, lining up the window of expected arrivals for every pulse. What you get is shown in Figure 2: a distribution that shows most of the neutrinos arriving in the expected window. But a few of them arrived early! None arrived late, and there seems to be a little gap at the end of the window. It is as if the whole pulse was shifted early by 60 nanoseconds.

Notice that of the more than 15,000 detected neutrinos, most of them don’t matter much; the most important neutrinos are the tiny fraction — much less than 1 percent of them — that arrived early. Also important are the very last neutrinos to arrive, which help indicate that the pulse isn’t wider than expected, but is just shifted early. So only a few of the 15,000+ neutrinos really matter.

Experts: what is really done is roughly a fit of the time distribution of the detected neutrinos to the time distribution of the protons that produced the neutrino pulses. Even that’s not quite as sophisticated as what they actually did, which was to assign a probability distribution for the departure time of each neutrino, etc., etc.
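The pile-up-and-fit idea can be caricatured in a few lines of simulation. This is entirely my own toy version, not OPERA’s actual likelihood fit: each detected neutrino departs at an unknown, uniformly distributed moment within its 10,000-nanosecond pulse, every arrival is shifted 60 nanoseconds early, and the shift is then recovered statistically from thousands of detections:

```python
import random
random.seed(1)

PULSE = 10_000.0   # OPERA-1 pulse length, in ns
SHIFT = 60.0       # hypothetical early-arrival shift, in ns
N = 15_000         # roughly the number of neutrinos OPERA-1 detected

# Arrival time of each detected neutrino, relative to the start of its
# pulse's expected-arrival window (0..PULSE if nothing odd is going on).
arrivals = [random.uniform(0.0, PULSE) - SHIFT for _ in range(N)]

# For a uniform pulse the mean arrival should sit at PULSE / 2; the offset
# from that estimates the shift.  (OPERA's real fit uses the measured,
# non-uniform proton pulse shape, which does much better than this.)
mean_arrival = sum(arrivals) / N
estimated_shift = PULSE / 2 - mean_arrival
print(f"estimated early shift: {estimated_shift:.0f} ns")  # ~60, up to ~24 ns of noise
```

Note why so many neutrinos are needed: a single detection pins the shift down only to within the 10,000-nanosecond pulse, and even 15,000 of them leave this crude estimator with a statistical error of roughly 24 nanoseconds, comparable to the effect itself.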
But wow — especially since the reality was actually a bit more intricate than my simplified version of it — is this ever a complicated way to measure the speed of neutrinos, when all you have to do, in principle, is measure the speed of a handful of them really well! And doing it in this complicated way opens the door to all sorts of issues. If, for example, there’s a problem that crops up in your understanding of the shape of the pulses — exactly how they start, and exactly how they end — or with the fact that you measure the shape of the pulses of neutrinos by studying the protons with which you create them — subtleties with the method I’ve described might introduce errors that would create a fake shift. Maybe.
It was obvious after OPERA-1’s public presentation of its results that a much better measurement would ensue if very short pulses were used instead. In fact many physicists had this thought immediately (and one even asked about it during the question/answer session following the presentation). But it wasn’t widely known (until I heard about it in Nagoya and reported it here) that there would be an OPERA-2, using short pulses.
OPERA-2: A Simpler Way to Measure Neutrino Speed
So — why are short pulses so much better?
Look at Figure 3. This is an entirely different technique: pulses only 3 nanoseconds long, and separated by hundreds of nanoseconds. That makes the pulses much shorter than, and the gaps between them much longer than, the 60 nanosecond early-arrival that OPERA-1 observed. So if OPERA-1 were correct, what would we expect? Instead of a window of expected arrival 10000 nanoseconds long for each pulse, OPERA would now have a window of expected arrival only 3 nanoseconds long. If neutrinos were to travel fast enough to arrive 60 nanoseconds early, then each pulse from CERN would enter and entirely exit OPERA long before the window of expected arrival even opened up. In short, if any speeding neutrino from the pulse were to be detected in OPERA, it would inevitably arrive early compared to the window of expectation, rather than, as in OPERA-1, typically inside the window.
This is exactly what OPERA-2 has observed (Figure 4). All 20 of the neutrinos they detected over the two weeks from October 22nd to November 6th arrived early, from as little as 40 nanoseconds to as much as 90 nanoseconds early, with an average of 62 nanoseconds. It’s unambiguous. Every neutrino is arriving early. And since we know the departure time of each neutrino to within 3 nanoseconds (the length of the pulse that contained it), and its arrival time to within about 10 nanoseconds or so (the measurement isn’t perfect; see below), we can estimate the speed of each neutrino separately. That wasn’t possible in OPERA-1.
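Each OPERA-2 neutrino therefore gives its own little speed measurement. Here is a sketch of the per-neutrino arithmetic (my own illustration, treating the ~3 ns departure and ~10 ns arrival uncertainties quoted above as independent and combining them in quadrature):

```python
import math

c = 299_792_458.0     # m/s
distance = 730e3      # approximate baseline, in meters
light_time_ns = distance / c * 1e9   # about 2.4 million ns

def speed_ratio(early_ns, sigma_depart=3.0, sigma_arrive=10.0):
    """Estimate (v/c, uncertainty) for one neutrino arriving early_ns early."""
    ratio = light_time_ns / (light_time_ns - early_ns)
    sigma_t = math.hypot(sigma_depart, sigma_arrive)  # quadrature sum, ~10.4 ns
    sigma_ratio = ratio * sigma_t / (light_time_ns - early_ns)
    return ratio, sigma_ratio

# A neutrino arriving 62 ns early (the OPERA-2 average):
r, dr = speed_ratio(62.0)
print(f"v/c - 1 = {r - 1:.2e} +- {dr:.1e}")  # roughly 2.5e-05 +- 4e-06
```

With a timing uncertainty of ~10 nanoseconds on a ~2.4-millisecond flight, each single neutrino determines its speed to a few parts in a million, easily good enough to distinguish “on time” from “60 nanoseconds early.”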
Avoiding a Jump to Unwarranted Conclusions
Now, two comments to prevent us from misinterpreting this observation.
First, what we have learned is that there was no mistake in OPERA-1’s technique of combining lots of data from many long neutrino pulses. We have not yet learned that OPERA-1 or OPERA-2 have correctly measured the speed of the neutrinos. For all we know right now, OPERA-1 and OPERA-2 are making the same mistake in their measure of distance or of time, or making some other subtle mistake common to both experiments. OPERA-2 is less open to criticisms of various types than OPERA-1, but the story is not over. Every experiment with a radical claim must successfully clear an obstacle course of objections. OPERA has now passed a very important test, but more tests lie ahead.
Second, the fact that not every neutrino arrives at the same time to within the 3-nanosecond pulse duration — the spread of the observed arrival times is, as I mentioned, much wider — does not imply that the neutrinos are traveling at different speeds from one another. We have to remember that every experiment has intrinsic imperfections, which translate into imperfect measurements. It is OPERA’s job to tell us how imperfect their measurements are, and what they say is this: when they combine everything they know about the imperfections (“uncertainties”, we call them) in their measurements — uncertainties in the measurement of the moment when a neutrino interacts inside their experiment, and in the timing measurement on the pulse of neutrinos when it leaves CERN, and in lots of other subtle sources of imperfection — the result of OPERA-2 is consistent with all of the neutrinos traveling at the same speed, with all of the different arrival times due to imperfect measurements. That doesn’t prove the neutrinos are all traveling at the same speed, only that OPERA-2’s result does not prove that the neutrinos are not traveling at the same speed.
Looking Back, and Ahead
Now why, you may ask, didn’t OPERA just run with OPERA-2’s short pulses from the very beginning? Once they decided to run OPERA-2, it only took them about two weeks to gather 20 neutrinos, and make a more convincing measurement than they made with all three years of OPERA-1! Well, that’s my question too. I think it’s because, again, OPERA-1’s measurement, unlike OPERA-2’s, was a parasitic measurement off of an experiment that was trying to measure neutrino oscillations. Meanwhile, OPERA-2 is great as a dedicated measurement of neutrino speed, but because the total number of neutrinos passing through the detector in a given month is 60 times less than with the long pulses of OPERA-1, it is impossible to do the neutrino oscillation measurements while running OPERA-2. Since it means temporarily giving up the original goal of the OPERA experiment, it wasn’t until OPERA-1 saw a strong sign of faster-than-light neutrinos that there was enough justification to run with the short pulses of OPERA-2. But the whole thing only took two weeks! I’m not sure why they didn’t run OPERA-2 for a couple of months back in September, before they made any public announcement about OPERA-1, if it was that quick for them to check it…
What’s next for OPERA? I don’t know, but I know what I would like. Of course they need to think of other ways to cross-check their experiment to rule out other possible sources of error. But also I would like them to run OPERA-2 again, for about two or three months, and detect about a hundred neutrinos rather than twenty. And then I’d want to see a plot, for all of the neutrinos they observe, of the neutrino’s energy on one axis and the early-arrival time on the other axis. In fact I’d already like to see it for the data they’ve got from OPERA-2, though there may well be too few events to show the effect I’m going to describe now.
In Figure 5 I’ve shown three hypothetical versions of the plot I’d like to see, with three possible ways the data might appear. Let me explain the details of the three plots, and what makes them different.
A very important constraint on the speed of neutrinos is that we know, from measurements of neutrinos and of light from the 1987 supernova, that neutrinos with energies about 1000 times smaller than those in OPERA’s neutrino beam travel very close to the speed of light. (Even though the neutrinos are of a different type in OPERA’s beam than in supernovas [and in fact are mainly anti-neutrinos], evidence from neutrino oscillations indicates that they must all travel at the same speed for a given energy.) Click here for my description of the supernova neutrinos and how we can use them to learn about neutrino speed.
In other words, it would seem, from the supernova measurements, that if we took a beam of neutrinos with energies of 0.01-0.04 GeV, instead of 10-40 GeV as in CERN’s beam, and aimed it at OPERA, those neutrinos would arrive at the expected time to within better than 1 nanosecond. I’ve plotted those supernova-like neutrinos as a purple dot on the three plots, indicating that neutrinos with energy well below 1 GeV would not, according to what we know from the supernova, arrive significantly early.
Meanwhile, I’ve sketched how OPERA-2’s neutrinos, shown in red dots, might look on such a graph. In the first plot at upper left, I show you what would happen if Einstein’s equations were exactly right and if there were no mistakes in the OPERA measurement. In that case, the neutrinos would on average have an early-arrival time of zero — some being measured to arrive a bit early and others a bit late just due to imperfect measurements. This is what we might have expected, but this isn’t what happened in OPERA-2.
Instead, OPERA-2 might be showing us one of the two possibilities shown in the other plots. If OPERA-2 has made a mistake in the distance measurement, or a mistake in calculating times, or if there’s some subtlety with relativity that their calculations missed, these errors will probably be independent of the neutrinos’ energies. All the neutrinos, regardless of their energy, will be early by the same amount (though as always the observed times will vary a bit, due to experimental imperfections). In that case we’d see something like the plot at lower right in Figure 5. But if in fact OPERA’s neutrinos travel about 2 parts in 100,000 faster than light, then since supernova neutrinos have speeds that differ from light by less than a few parts in 1,000,000,000, neutrino speed must vary by a part in 100,000 between about 0.02 GeV and about 20 GeV. And if it varies that much, we should expect (though it is not guaranteed) that it will still be varying between 10 and 30 GeV, and between 30 and 60 GeV, and so on. Therefore, if in fact Einstein’s theory of relativity needs modification, we would expect the data to give something like the plot at lower left in Figure 5, with OPERA’s lower energy neutrinos traveling slower and arriving later than its higher energy neutrinos. If we saw a plot that looked like that, with a clearly varying arrival time as a function of energy, that would make us all sit up very straight in our chairs. Very few mistakes that OPERA might have made could make the plot look like that.
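To see how steep that energy dependence would have to be, here is a purely illustrative calculation of my own. I assume, with no theoretical justification, that the speed excess grows as a simple power law in energy, anchored at the supernova and OPERA numbers quoted above:

```python
import math

# Anchor points quoted in the text (all approximate):
E_sn, excess_sn = 0.02, 1e-9        # supernova-like: at most ~parts in a billion
E_opera, excess_opera = 20.0, 2e-5  # OPERA's claim: ~2 parts in 100,000

# Power-law index needed to connect the two anchors:
n = math.log(excess_opera / excess_sn) / math.log(E_opera / E_sn)
print(f"required power-law index: n ~ {n:.1f}")  # around 1.4

# Predicted early arrival over the ~2.4 ms flight, at several energies:
flight_ns = 2.435e6
predicted = {}
for E in (10, 20, 40):
    excess = excess_opera * (E / E_opera) ** n
    predicted[E] = excess * flight_ns
    print(f"E = {E:2d} GeV -> early by ~{predicted[E]:.0f} ns")
```

Under this (entirely hypothetical) power law, 10 GeV neutrinos would arrive roughly 20 nanoseconds early while 40 GeV neutrinos would arrive well over 100 nanoseconds early: exactly the kind of energy-dependent pattern the lower-left plot shows.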
Again, I don’t know why OPERA didn’t publish this graph already with their existing data; maybe they felt the result was too ambiguous. But I hope either (a) it isn’t too ambiguous, and they’ll make it public very soon, or (b) if it is too ambiguous, then they’ll run another round of OPERA-2 next year, and get a couple of additional months of data that they can combine with their current results to make the plot of energy versus arrival time.
So… The OPERA seems far from over. Whether OPERA’s measurements are right or wrong, OPERA-2 is a real step forward, both allowing us to narrow the list of possible problems with the experiment and offering hope for more insights down the road.
107 thoughts on “OPERA: Comparing the Two Versions”
Nice read, and I think written at a good level. Thank you! 🙂
thanks! glad to hear it.
Love the blog thanks for the posts! I’m a lay person so please forgive if my assumptions or questions are obviously wrong, not to mention my desire to jump from very tentative results to BIG questions.
My current assumptions:
Energy is energy; there aren’t different types of energy.
The speed of light being a constant or ceiling is due to a particle being completely energy (there is no matter left to convert to energy).
Question: If OPERA-2’s results were to stand, would that mean that there must either be different “types” of energy that follow different rules, or that there is a way to make energy more… energetic? Or are there other, equally or more valid hypotheses?
Follow up question: Would quantum entanglement influence physicists in how they thought about the above problem? My underlying assumption is that quantum entanglement is the only other known example of information transferral breaking the speed of light.
Right about there being only one type of energy (though it can show up in different ways); more or less right about the ceiling, but it is better to say that, simply, as you add more and more motion-energy to a particle, so it has far more motion-energy than mass-energy, then its velocity approaches that of light.
Answer to your question: Probably it doesn’t mean what you suggest. The reason that energy is energy and there’s only one type is related to the apparent invariance of the laws of nature over time — that the laws of nature today are the same as those of yesterday and of tomorrow. But as for the correct hypothesis: I have no idea. That’s why (as described in the last part of my article) I’d like to see more information from OPERA-2; we need much more experimental guidance before we can guess what is going on.
Follow-up: I don’t think so. Quantum entanglement does not transfer information faster than the speed of light. It just seems like it must be able to, but you can prove it doesn’t. What quantum entanglement *is* doing is a matter of some discussion — I prefer to say that it is storing correlated information in a non-local way. But you still can’t transfer that information from one observer to another faster than light. Tricky point that I don’t think I know how to explain very efficiently. Obviously I’ll have to figure that out at some point if this site is to be complete.
Quantum entanglement does not transfer information faster than the speed of light… Tricky point that I don’t think I know how to explain very efficiently. Obviously I’ll have to figure that out at some point if this site is to be complete.
I personally am fond of the discussion in The Quantum Challenge by Greenstein and Zajonc. (Disclosure: I had sophomore “Modern Physics” from Zajonc, who used the first edition of this book as a text.) See section 6.4, “Does Quantum Nonlocality Violate the Principle of Relativity?”, which builds upon discussions of EPR and Bell’s theorems earlier in chapters 5 and 6. Even if you don’t find this reference useful for writing your own explanation, perhaps other readers of this site would enjoy checking it out in the meantime. (I suggest “checking it out” of a library if possible, since it has a typical textbook price.)
with respect to ongoing results, concerning Superluminal neutrinos from Opera, interesting idea is suggested
by E. Stefanovich: Superluminal effect with oscillating neutrinos by Eugene V. Stefanovich
I’ve seen many interesting ideas so far. Most of them are very… “interesting”.
Matt. Thank you very much for the article. I’ve been waiting ever since this Friday. Thank you for putting it up so quickly!
OPERA-2 has ruled out a whole class of possible mistakes of a statistical nature, and measuring an energy dependence would rule out a whole different class of possible mistakes. However, OPERA has said since the beginning that the range of energies present in the beam is not a wide enough “lever” to allow them to detect an energy-dependence in the velocity. What would make sense would be to run for, say, a couple of months in the OPERA-2 setup, then reduce the beam energy at SPS and run for another couple of months. SPS’s advertised maximum beam energy is 450 GeV, and OPERA has been running with a 400 GeV beam; I assume this means that the efficiency of the experiment increases with energy, so they wanted to run with the maximum energy at which the accelerator would run smoothly and reliably. (Higher energy would give better kinematic focusing of the neutrino cone. I assume the neutrinos’ interaction cross-section is also energy-dependent.) This presumably means that their detection efficiency would be poorer at a lower energy, and that energy would then have to be chosen as a compromise between detection efficiency and the need to get a long enough “lever.”
I don’t think I agree with your statement. OPERA has neutrinos from about 10 GeV to well over 40 GeV; that is easily enough, since the energy dependence is most likely pretty rapid. OPERA-1 didn’t have the lever, and to get it from OPERA-1 would require another 3 years of data; but a two month run of OPERA-2, increasing this month’s OPERA-2 data by a factor of 5, would most certainly provide the lever. And what I am suggesting would be much cheaper and much faster than what you suggest.
Hard to say. We have no idea of the form of the velocity’s energy-dependence. Under the hypothesis that the result is wrong, there is no energy-dependence. Under the hypothesis that the result is right, we have no viable theory that can predict the energy-dependence and that also is consistent with other data (such as the nonexistence of Cherenkov-like processes). Meanwhile, I suppose they will want to resume work on neutrino oscillations. There are presumably a lot of grad students and postdocs at OPERA who would like to go on the job market with some successful work on neutrino oscillations, rather than starting their scientific careers with the FTL result, which will almost certainly turn out to be wrong.
Excellent article. My take is that even without the energy vs time distribution you can clearly see that all neutrinos are distributed as what looks like a flat time distribution around 60 ns with a +-25 ns jitter. This seems to point to your argument of an experimental error in the time sync/distance/delays. In my mind it seems less appealing as physics right with opera-2 than with opera-1.
Also, I have now heard contradictory arguments that the expected jitter is either 50 ns or 10 ns. If it’s 50, I’d say there’s a mistake somewhere. If it’s 10, it makes it a bit more appealing.
Anyway, this is the difference between “some neutrinos arrived early” and “all of the neutrinos arrived early.”
I’m not sure the statistics is sufficient for your conclusion, though I am sympathetic, as the distribution of neutrino arrival times is a bit flatter than one would expect were the uncertainty due to a purely statistical effect. But in any case, we need more information.
The paper says that their master clock at OPERA has a granularity of 50ns (20MHz), so you would expect a flat time distribution with +-25ns jitter.
On the other hand, it’s a bit disconcerting that they are measuring an effect on the order of 50ns with a clock whose granularity is 50ns; maybe there’s an off-by-one bug in some software counter somewhere.
The OPERA master clock puts out pulses on a 50 ns period. It runs on an electronic oscillator (not GPS or atomic standard). That causes it to have 50 ns of jitter relative to the accurate timing from the PolarX+Cs unit, which is a GPS receiver combined with an atomic clock. But that has nothing to do with the expected timing resolution of OPERA’s times. The times recorded are *not* simply the times coming out of the OPERA master clock. They are high-precision times from the PolarX.
I’ve seen many interesting ideas so far. Most of them are very… “interesting”.
Sometimes “interesting” ideas, although they may be incorrect ones, could at least said to be original. It is from new and original ideas that big changes usually come….
Wow, some pretty interesting results!! I would love to see a post speculating on the implications of this experiment being true and how this could benefit us mere humans..
The problem is that I don’t know the implications. We will need far more information to know — much more experimental information, with many more details, and some time to think about what it means. History teaches us that the implications of discoveries take a long time to emerge. When atomic energy levels were first discovered, no one could have predicted that quantum mechanics would lead to a computer industry and cell phones.
Thanks again for this beautiful article, it makes me very curious too to see such an energy – early arrival time plot 🙂 …
And I’ m still wondering if certain QG or “spacetime structure” effects, only kicking in with increasing energies of the neutrinos, could explain an energy dependence of the late arrival time (if it is there) ?…
Just to say, I`m expecting no answer …
(you mean “early” arrival time.) The problem is to push this structure’s effect into the neutrinos and keep it out of the electrons and muons, for which the constraints from Cerenkov-type radiation are very strong. That’s not easy to do, theoretically. Notice I don’t say impossible, because I don’t know that it is impossible.
Fantastic summary, Matt.
As I observed at http://t.co/bafnPw37 the distribution of arrival times pretty well matches exactly the 50-ns-wide bin representing the synchronization of their 20 MHz master clock. (That the “jitter” in this clock sync is assumed to be uniformly distributed over [-25,+25] ns is confirmed by their use of the square root of 12 in the statistical error term — that’s from the variance of a uniform distribution.)
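(As an aside, the square root of 12 is just the standard deviation of a uniform distribution; here is a quick numerical check of my own for the 50 ns case:)

```python
import random
import statistics

random.seed(0)
# Timestamps quantized to a 50 ns clock carry a jitter uniform on [-25, +25] ns;
# its standard deviation is 50 / sqrt(12), about 14.4 ns.
samples = [random.uniform(-25.0, 25.0) for _ in range(200_000)]
print(f"empirical sigma: {statistics.pstdev(samples):.1f} ns")
print(f"analytic sigma:  {50 / 12 ** 0.5:.1f} ns")  # 14.4
```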
Any further intrinsic dispersion in travel times (i.e. because of a dispersion in neutrino speeds) would be convolved with this rectangular distribution; and again here, as with OPERA 1, there is no sign of such spreading. (I concur wholeheartedly that it would be good for them to run it for a few months to get the statistics as high as possible on this, but it may be a case of diminishing returns for them, politically.)
So we’re pretty well left with your third graph, “More plausible that OPERA has an error.”
The only caveat I place on that is that I am still advocating the possibility that the 60-nanosecond advance occurs close to the production point (whether through some condensed matter effect through the 18-meter hadron stop, as speculated at http://t.co/lOzF0IYF , or due to a spacelike intermediate particle, as speculated at http://t.co/PpSokwji ).
OPERA (or even CERN themselves) could test for this fairly easily, I believe, if they temporarily replaced the muon detectors at the far end of the hadron stop — which are currently designed to test alignment, not timing, as far as I understand — with ones that could measure arrival time to this level of precision. They would then either find that the 60-nanosecond advance is occurring between the proton spill point and the far end of the hadron stop — in which case it could be investigated completely at CERN with the muons, with no need for detecting neutrinos 730 km away at all; or they would find the muons arriving exactly at the time expected assuming the speed of light — in which case they will have then purified the 60-nanosecond result of OPERA 1 and OPERA 2 to the 730 km neutrino trip to Italy (and eliminating any potential systematic errors between the proton extraction point and the end of the hadron stop as well).
The latter can then be tested by MINOS at Fermilab. The former probably can’t (but it wouldn’t need to be, anyway, for the reasons outlined above).
Considering the (common wisdom) negativity about FTL, it’s no wonder that the team hesitated before running OPERA 2… They needed to make sure no plausible grossly overlooked small error lurked…
There is something to what you say, but it seems to me that the issue that OPERA-2 addressed was in fact one of the places where a “plausible grossly overlooked small error” might have lurked. I can see them taking the other point of view: since this was a good way and a quick way to check one aspect of their result, why not do it before going public? For this reason I suspect the reasoning was different.
You shouldn’t have the view that this “common wisdom” negativity is somehow knee-jerk or political. Please read: http://profmattstrassler.com/2011/10/17/why-scepticism-in-science-isnt-just-politics/
Is there no possibility of the neutron beam being fired in the direction of another alternative receiver at a different distance? Would this not then demonstrate truly that if there is a speed factor, then we would see a different shift other than 60 nanoseconds?
(1) neutrinos, not neutrons… very big difference! Neutrons are like protons (made from quarks and antiquarks and gluons) and they don’t travel very far through matter. Despite the similar name, neutrons and neutrinos are utterly different in every respect except that they are both electrically neutral.
(2) neutrino “receivers” (“detectors” is the word we use) are expensive, large devices. OPERA is not an experiment you can pack in a truck and take somewhere else. And the CERN neutrino beam (which is complicated to make in the first place) is aimed at the Gran Sasso laboratory; you can’t aim it anywhere else without building another tunnel, which is expensive.
(3) another measurement like OPERA’s would potentially suffer from the same challenges of measuring distance and time, so it wouldn’t help resolve the problem. What you really want is an experimental design that makes it easier to measure the times and distances. One approach would be to have two detectors, one behind the other, not so far apart and with extremely precise and accurate timing. I’m not sure such a design is practical with current technology; that’s for the experts in neutrino detectors to say.
Thanks for the reply (and respect for the time you spend communicating this to the uninitiated). Meant to write neutrinos by the way (honest!). On point 3 though – if this gets repeated anywhere else, there will be a different distance between source and detector and we would expect to see a (distance related) difference in the time discrepancy, right? If the discrepancy did correlate to the distance then would this conclusively prove that we were indeed seeing a real effect?
In principle, yes. But in practice, any change in the neutrino beam might introduce other small time delays that would have to be corrected for. Ideal would be to have another neutrino experiment in front of or behind OPERA, using exactly the same beam. But unless something with very high time precision (so it can measure very short time delays) can be built very near CERN, I think that’s not geographically practical.
You need to measure the path the neutrinos actually take, and not the straight-line distance of 730 km between the two points. The path the neutrinos take is a bowed line, caused by gravity; the straight line is impossible for the neutrinos. Gravity forces them to take the bowed line, and this path is a little bit longer: longer by exactly the time which appears to be above the speed of light.
Gravity is the mistake that is made in the measurements.
The speed of light is c (300,000 km/sec) when you take the bowed line, and above c when you take the straight distance of 730 km. The two distances are different, but not the speed of light.
Thank you for the physics lesson. Let’s check your results. First, how much does the path bow? The neutrinos travel the distance between CERN and OPERA in 2.4 milliseconds. In 2.4 milliseconds, how far does an object fall? And is the difference enough to explain the 20 meters of path-length that would potentially cause a 60 nanosecond discrepancy? What do your equations say?
Thank you Matt Strassler
The straight line is 730 km. The time is 10500 ns minus 60 ns. This path is impossible, because gravity makes the neutrinos take the bowed path. So the straight path is not the one to calculate with; and yet that time is even faster than the speed of light!
The bowed line is 730 km plus 60 nanoseconds to go. 10500 ns: the time divided by c gives exactly the bowed distance. The arrival is at the same place!
730 km / 300,000 = 0.0024 seconds, and this is exactly the difference in travel time between the straight and the bowed line.
Great example of using simple numerics to check the relevance of an effect. 2.4 milliseconds at 9.807 m/s^2 gives a fall distance of around 28 microns (1/2 g t^2). Hendriks, we’re concerned with apparent path-length differences more like 60 ns * c = 18 meters. The additional path length due to the falling of the neutrinos is therefore too small to matter, by orders of magnitude.
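For anyone who wants to reproduce this check, here is a minimal Python sketch using the thread’s round numbers (the flight time is computed as 730 km / c rather than rounded to 2.4 ms, which is why it gives ~29 microns instead of 28; the conclusion is the same):

```python
# Back-of-envelope check of the "bowed path" objection,
# using the round numbers from the discussion above.
g = 9.807            # m/s^2, gravitational acceleration
c = 3e8              # m/s, speed of light (rounded)
t = 730e3 / c        # s, flight time over the 730 km baseline

fall = 0.5 * g * t**2        # how far anything falls during the flight
anomaly_length = 60e-9 * c   # the 60 ns anomaly expressed as a distance

print(f"flight time   : {t*1e3:.2f} ms")          # ~2.43 ms
print(f"fall distance : {fall*1e6:.0f} microns")  # ~29 microns
print(f"needed gap    : {anomaly_length:.0f} m")  # 18 m
```

The fall (tens of microns) is about six orders of magnitude smaller than the 18 meters of extra path length that would be needed, so the bowed line cannot be the explanation.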
Distance/speed = time. 730/300,000 = 0.0024 sec (the straight line). This straight line is not possible to travel, because of the gravity of the Earth. You need to know and notice this impossibility. The neutrinos have to take the bowed line, and this path is 18 meters longer, but it ends at the same place as the end of the straight line. The bowed line is thus 18 meters longer. This means 60 nanosec = 18/300,000. The time is the same, so the 730 km path must be 18 meters longer! The bow makes the path 18 meters longer but the arrival is at the same place. Calculating with the impossible straight path gives a speed faster than light (c), and this is the mistake that is made.
As I and someone else already commented, your calculation is incorrect, by a very large factor. The bowing effect is very tiny and completely negligible.
I think the reason for not running the “Opera-2” experiment has more to do with the fact that CERN is a giant bureaucracy and you just don’t flip switches without paperwork and budgeting. It might well have taken the release of the preprint to kick loose the money to run “Opera-2”.
Also, the Opera folks probably consider themselves to be pretty good scientists, and the confirmation from the second experiment proves that the methodology used in Opera-1 was good science. It would not surprise me if they in fact thought it WAS good science before running Opera-2, hence the release of the preprint. And unlike the broader community the scientists in Opera had a lot longer to get comfortable with the data and with the methodology. Combine this kind of comfort with their results and the likely difficulty in getting bureaucratic approvals and you get history.
You may very well be correct. Still… it was an extraordinary claim and the need for this simple cross-check should have been clear pretty early on.
The upcoming results from MINOS should provide a good check on the OPERA measurement since MINOS uses a lower energy neutrino beam than OPERA and uses two neutrino detectors, one close to the beam source and one 734 km further downstream. MINOS uses the time difference of interactions in the near and far detectors which eliminates any systematic errors arising from neutrino beam formation.
Is it possible to reconcile the 1987A supernova neutrino results with OPERA by assuming that superluminal neutrinos radiate energy until they slow down to c but that the effective distance over which this occurs much greater than the CERN to OPERA detector distance? That is, the superluminal radiation mechanism exists but is suppressed?
That’s an interesting suggestion. I have to think about it.
Since OPERA sees no distortion in their neutrino beam (and ICARUS confirms with high precision that there is no distortion) I think your comment is not, in the end, relevant, because if any effect impacted the supernova neutrinos it would presumably do so for the OPERA neutrinos to an even greater degree. But you are right that there may be a logical loophole there. At the moment I see no way to thread a theoretical idea through it, but one should not forget that the loophole might exist.
Very nice summary.
I’ve been giving updates on OPERA from time to time in a modern physics course I’m teaching and your blog is quite helpful.
Is there more information available about the possibility of a vL/c^2 term suggested by van Elburg? It seems striking that this term yields about 30 ns by applying this rather simple effect that comes from special relativity. Whether or not it is relevant to OPERA would depend on how their GPS clocks are configured. Do you know if the collaboration has provided more information about this?
I do not know, I’m sorry. And I’m not sure I would properly understand the information if they did, unless I heard it explained in detail.
Regarding the “striking” fact that vL/c^2 gives 30 ns; well… I’ve already seen several other effects that give something of order 60 ns or 20 m. Numerical coincidences like this are part of the theorist’s arsenal, and my experience is that most of them are just that: coincidences. They start to look less striking when you’ve seen a hundred of them in other contexts! 🙂 Personally I suspected that OPERA-2 would not agree with OPERA-1, so I wasn’t worrying too much about such things yet. I’ll probably pay more attention now, but there are still many possible sources of problems, and most of them (GPS subtleties, surveying errors, electronics time delays, etc.) are ones about which I would be unable to form an intelligent opinion.
Many details of the OPERA experiment including clock synchronization are available in this PhD thesis from one of the collaborators:
OPERA uses Common View GPS to synchronize the two clocks at CERN and Gran Sasso. A GPS satellite sends a pulse and the arrival time of this pulse is measured at the two locations. Knowing the difference in distance from the satellite to the two locations gives a correction to the time difference of the two clocks. The velocity of the pulse to the two locations is independent of the motion of the satellite (this is the basis of special relativity after all). The effect that van Elburg describes is not applicable to Common View GPS. The clock on the satellite is not directly used for synchronization.
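The common-view scheme described above can be illustrated with a toy calculation. This is only a sketch of the principle; all the numbers (the distances, the 40 ns offset) are invented for illustration and are not OPERA’s actual values. Note that the satellite’s own clock never appears, matching the statement that it is not directly used for synchronization:

```python
C = 299_792_458.0  # m/s, speed of light

def common_view_offset(arrival_a, arrival_b, dist_a, dist_b):
    """Infer the offset of clock A relative to clock B from one shared pulse.

    arrival_a, arrival_b: pulse arrival times as read off each local clock (s)
    dist_a, dist_b: known satellite-to-station distances (m)
    """
    # Subtract the purely geometric part of the arrival-time difference;
    # whatever remains is the desynchronization of the two clocks.
    return (arrival_a - arrival_b) - (dist_a - dist_b) / C

# Toy example: station A sits 300 km farther from the satellite than
# station B, and its clock happens to run 40 ns ahead.
geom = 300e3 / C  # extra light travel time to the farther station
offset = common_view_offset(1.0 + geom + 40e-9, 1.0,
                            20_500e3 + 300e3, 20_500e3)
print(f"inferred clock offset: {offset*1e9:.1f} ns")  # ~40 ns
```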
Thanks. Showing yet again that you should get all your details straight before writing a paper and going to the press with it.
Thanks for pointing out this thesis. OPERA is doing a good job making details available. It’s certainly possible that vL/c^2 from SR turns out to be just a numerical coincidence in this case; GPS common mode receivers can be configured to take it into account, if it in fact matters.
But I do think the question of whether there is a possible parallel between the OPERA timing measurements and the famous “trapped train in a tunnel” special relativity paradox is a question worth digging into a bit deeper.
I have a question regarding the methodology used by the OPERA collaboration that I hope you may be able to answer.
Could you explain how OPERA’s methodology compensates for the relativistic effects of site-to-site time measurement changes, due to Earth-Moon-Sun interactions and differential gravitational tidal forces in particular?
More succinctly, have they accounted for differential relativistic tidal forces? This is not to be confused with Newtonian tidal forces that do not compensate for time dilation effects.
Dear Prof. Strassler,
I basically agree with your ideas presented in fig. 5: If early arrival times do not depend on energy, then one has a strong argument that neutrinos’ speed does not exceed the speed of light. However, even in this case, there is still room for a genuine superluminal effect. Let me explain what I mean.
Most discussions of the OPERA experiment assume that particle transformations along the path CERN->Gran Sasso occur in a local fashion, so that one has an uninterrupted trajectory, which starts as a proton leaving SPS accelerator and ends up as a neutrino in the OPERA detector. Let me illustrate this assumption by the following graph, where “O” denotes interaction/decay vertices.
CERN ===== proton =====>O=====> meson =====>O===== neutrino =====> OPERA
The usual logic is that we know (from decades of observations) that both protons and mesons travel subluminally. Therefore, the only way to explain the early arrival of neutrinos is to assume that their speed is higher than “c”. However, there is a different possibility. Nobody has ever seen neutrino leaving the meson decay vertex. So, there is a possibility that neutrino trajectory starts not exactly from this vertex, but 18 meters away from it:
CERN ===== proton =====>O=====> meson =====>O[18 meters gap]===== neutrino =====> OPERA
In this case all particles travel slightly slower than the speed of light, but neutrinos arrive in the OPERA detector 60 ns earlier than expected due to the 18 meters head start. Note that in this case there is no contradiction with SN1987A supernova observations, and the Cohen-Glashow objection (neutrino energy loss due to Cherenkov-type bremsstrahlung) does not apply.
I don’t know which physical effect can be responsible for the 18 meter gap between the points of meson decay and neutrino emergence, but at the first sight I don’t see any contradiction with fundamental postulates, such as conservation laws and the principle of relativity.
Well, your idea is a creative one, but I think you would violate many more principles this way: Isotropy (how does the neutrino know to appear ahead of the meson instead of behind it?), Locality (something happening at one point would have an effect a finite distance away without anything moving between the two points) and Causality (from the meson’s point of view, the neutrino would appear BEFORE the meson had decayed — remember that in Einstein’s theory, whether two things are simultaneous depends on your reference frame.) You’d also give up Local Conservation of Angular Momentum (which requires that angular momentum be unchanged from moment to moment, whereas, from the meson’s point of view, the angular momentum of the always-spinning neutrino would appear from nowhere) and Local Conservation of Energy and Momentum for similar reasons. Frankly I suspect nature will not force us to give up all of these things at once.
There’s one thing I don’t understand. Since neutrinos are so light, given the energies, wouldn’t one expect them to be travelling very close to their limiting speed (even if that speed is greater than the speed of light)? In particular, wouldn’t even the 1987A supernova neutrinos also be traveling close to their limiting speed? Not only that, but the difference in speed for the energy ranges considered should be negligible.
If OPERA is correct and an energy dependence is observed, does that mean that either the limiting speed for neutrinos is far far greater than the speed of light (or they don’t have a limiting speed at all)?
You are absolutely right that to make the 1987 supernova results consistent with OPERA, it would seem that the neutrino velocity as a function of energy must be very odd. For low energy neutrinos it must be the usual one predicted by Einstein (so that all the 1987 Supernova neutrinos travel at just about the speed of light) but then increase somehow for neutrinos with energies larger than about 0.1 GeV. It is not a very natural situation. You can however write down equations that will do this…
Well, maybe — probably — OPERA is wrong. Or maybe there is something subtle we don’t understand yet that makes OPERA’s neutrinos behave very differently from those from the supernova (e.g. maybe the rock has an important role to play? doesn’t seem likely but … ) Or maybe nature is just very odd.
As for what the ultimate limiting speed of neutrinos might be — or whether there is one — the answer to that question (if OPERA is correct) will await further experiments and theory. It’s not possible to guess with so little information.
Matt, thank you for your comments. (I also would like to thank Joseph for mentioning my work.) These are valid concerns. I don’t have answers to all of them, but possibly I can clarify a few points.
My published model is one-dimensional, so it doesn’t address the isotropy question in full. However, in this model the muon neutrino “knows” that it must be created ahead of the meson, while the tau neutrino always lags behind. This fact follows naturally from the chosen form of the neutrino interaction Hamiltonian. I believe that similar anisotropy can be reproduced in the full 3D model. The rotational invariance does not seem to be an obstacle, because there is a preferred direction – the momentum of the unstable meson. These are just my guesses, because no 3D calculations have been done yet. These calculations (taking into account neutrino spin) should also answer your question about the angular momentum conservation. If the interacting model is Poincare invariant (and I believe that such a model can be constructed), then all conservation laws will be satisfied automatically. I am working on it.
I also have something to say about your point on causality. There is a section 4.1 in the paper which discusses this point. My argument is that boost transformations of observables in any interacting system (e.g., in the system “unstable meson + its decay products”) must depend on the strength of interaction. (This dependence is ignored in Einstein’s special relativity, which makes this theory approximate.) Again, no detailed calculations in different frames have been done for the meson decay, but there are reasons to believe that taking the interaction-dependence of boosts into account one can show that the simultaneity of the neutrino emergence and the meson disappearance is retained in ALL moving frames. My belief is based on the quantum relativistic theory of action-at-a-distance presented in the book arXiv:physics/0504062. See, especially, chapter 11.
I don’t think you’ve discussed the relevance of the OPERA measurements to the Standard Model Extensions developed over the last 2 decades – notably the Díaz-Kostelecký ‘Puma Model’:
There’s a good August review by Díaz:
…but I can’t find any published discussion of how well the energy-dependence of the coefficients of Lorentz violation in (variations of) the Puma Model fits the OPERA results so far.
PS: Could the non-standard dispersion relations for oscillating neutrinos in the Puma Model get round the Glashow-Cohen objection?
Dear Prof Strassler,
I am Professor of Electrical Engineering at The University of the West Indies in Trinidad and Tobago. I have been reading your very interesting comments about OPERA and whether or not the experiment will ultimately stand up to scrutiny. Your comment about the situation faced by the researchers who made the superluminal neutrino claim (inability to sleep etc) is especially interesting since the idea that light speed is the ultimate speed is really universally accepted. A related aspect of special relativity which you have mentioned in this discussion and which is also universally accepted is the light speed invariance postulate according to which the speed of light is constant in all inertial frames. It may surprise you to know that it has been known for some time by those operating the GPS that light travels faster West than East thereby violating this postulate as applied on the surface of the Earth. However they mask this result by an adjustment they refer to as a “Sagnac correction” in order to continue the illusion that light speed is really constant. On the basis of this light speed variation, special relativity has in fact been invalidated by the GPS. Of course the scientific mainstream does not countenance any such claim and routinely rejects papers that question relativity. Since on the basis of rigorous scientific evidence I am convinced that special relativity is wrong, I am hoping that the OPERA experiment holds up since it takes 150 scientists at a world-class institution to get the attention of the scientific community about the possible invalidity of special relativity.
Good luck with your claim!
The Sagnac effect due to working in a rotating (non-inertial) frame is well understood, and accounted for in OPERA’s analysis. From page 16 of arXiv:1109.4897: “Corrections were also applied to take into account the Sagnac effect caused by the rotation of the Earth around its axis. This yields an increase of TOF_c by 2.2 ns, with a negligible error.” Those interested in understanding the Sagnac effect can start at
As a general rule, Wikipedia is a great place to start learning about any given topic, but a terrible place to stop learning.
“As a general rule, Wikipedia is a great place to start learning about any given topic, but a terrible place to stop learning.” A wonderful turn of phrase, one that I shall borrow in future.
Well, to give credit where it is due, I myself borrowed it from Andy Cohen.
Knowing Andy as I do, it is no surprise: vintage A.C.
In the first version of the OPERA paper I do not think the Sagnac effect was included. As far as I am aware this was done in the second version only after it was raised by one researcher. But two questions arise: (1) Why isn’t the effect of the orbital movement of the Earth also included? (2) Why in the vast majority of light speed experiments conducted on the surface of the Earth no such correction is included?
Why isn’t the effect of the orbital movement of the Earth also included?
Continuing to quote from page 16 (of 1109.4897v2), “The Earth’s revolution around the Sun and the movement of the solar system in the Milky Way induce a negligible effect, as well as the influence of the gravitational fields of Moon, Sun and Milky Way, and the Earth’s frame-dragging.”
I think the essential negligibility of all of these effects even over a 700+km baseline is reasonable justification for neglecting them in typical experiments. This is probably also why they were not discussed in v1, as you point out: they have almost no effect on the results.
Jarah Evslin makes some very good comments in this paper (written before OPERA-2): “Challenges Confronting Superluminal Neutrino Models,” http://arxiv.org/abs/1111.0733 . He points out that one of the only really model-independent ways to test whether the OPERA result is right is to test whether the time anomaly is proportional to the baseline. This type of test, unlike a test of the energy-dependence, gives the possibility of conclusively proving or disproving the existence of the effect. Unfortunately MINOS and OPERA have baselines of essentially the same length, and although T2K does have a different (shorter) baseline, T2K’s timing would need to be upgraded. And a negative result from T2K wouldn’t be conclusive, because J-PARC is only ever supposed to get up to 50 GeV, as opposed to SPS’s 400 GeV. One proposal mentioned by Evslin is to produce neutrinos at CERN and detect them in Finland, which would give a much longer baseline. (What present or future facility is there in Finland?)
It seems to me that it would be very helpful for OPERA to do runs with proton energies of 50 and 120 GeV, the same as the energies available or planned at J-PARC and NuMI. If they do this and see an energy-dependence in the effect, then it’s strong evidence that the effect is real. If they do this and get a non-null result, it means that MINOS and T2K are not wasting their time if they put time and money into getting their instrumentation improved to the point where they can try to reproduce the OPERA result, and it also opens up the possibility that T2K can check the proportionality to the baseline. Confirmation of baseline-dependence with a CERN-to-Finland experiment would also be extremely strong evidence that the effect is real.
I really think it should already be possible to do a lot at OPERA with the present neutrino beam’s energy dependence. Don’t forget there are neutrinos with energies between 10 and 100 GeV in the beam; only the average is 20. If they were to see an energy dependent effect, that would raise the plausibility of their measurement, and that in turn would justify changing the CERN beam’s energy to get more information. But they should do the simple and inexpensive things first.
About Finland; I don’t think a longer baseline is a good idea. I think two detectors, one behind the other, is a better idea; and I think more precise timing on a shorter baseline is a good idea.
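To make the baseline test concrete: a genuine speed excess predicts an anomaly growing linearly with baseline, while a fixed offset near the production point predicts the same 60 ns everywhere. A toy comparison (the 60 ns and 730 km are from this discussion; the alternative baselines are only rough illustrative values):

```python
L_OPERA = 730e3   # m, CERN-to-Gran Sasso baseline
DT = 60e-9        # s, the observed early arrival

def dt_speed_excess(L):
    """Anomaly if neutrinos genuinely exceed c: grows with the baseline."""
    return DT * (L / L_OPERA)

def dt_fixed_offset(L):
    """Anomaly if a fixed 60 ns shift occurs near production: baseline-independent."""
    return DT

# Rough, illustrative baselines: a T2K-like one, OPERA's own,
# and a hypothetical CERN-to-Finland one.
for L in (295e3, 730e3, 2300e3):
    print(f"L = {L/1e3:5.0f} km : speed excess -> {dt_speed_excess(L)*1e9:6.1f} ns, "
          f"fixed offset -> {dt_fixed_offset(L)*1e9:5.1f} ns")
```

Measuring at two sufficiently different baselines with the same timing systematics is what would separate the two curves.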
Taking into account the 50 ns jitter of the master clock, isn’t the fact that OPERA-2 has a 60 ns distribution for the 20 events already pointing in the direction of the TOFs being independent of neutrino energy?
By the way, thanks for maintaining this beautiful blog.
Yes, it does point that way (another commenter also emphasized this.) But why guesstimate when you can measure?
I think you’re missing the point made by Evslin. His point is that energy-dependence is actually not a definitive test, whereas baseline-dependence is. A test of energy-dependence can support the FTL hypothesis but cannot disprove it.
I understand that the beam has lots of different energies in it. However, the spectrum from SPS is wildly different than the spectrum you’d get from J-PARC. There is so little overlap between the neutrino spectrum you’d get from J-PARC and the one you get from SPS at 400 GeV proton energy that it seems difficult to me to justify spending time and money at T2K to gear up for a velocity measurement when it’s not clear that they should even expect to see an effect with the lower energy spectrum.
I did not miss Evslin’s point (it is pretty obvious to anyone and everyone, I am not sure why you ascribe it to him personally.) I was answering your point, not Evslin’s, when addressing energy dependence.
Regarding Evslin’s point: I think that a longer baseline is a bad approach. What you gain in timing sensitivity you lose in beam intensity and distance uncertainties, and I suspect you’d be much better off with a shorter-distance experiment with better timing precision and a simpler distance measurement.
Regarding your remark: Nothing in what I said was suggesting that the T2K measurement was obviously a good idea, so I’m not sure why you brought it up. My point was that a re-run of OPERA-2 would be inexpensive and quick, and would at least offer the opportunity of seeing clearly whether there is a sign of energy dependence. If there is one, then we can talk about changing the CERN neutrino beam.
In my suggested model (the latest reference is http://vixra.org/pdf/1110.0052v3.pdf) the magnitude of the effect (60 ns) does not depend on the baseline length. The model also suggests that MINOS experiment (which measures the propagation time between two neutrino detectors) will not see any deviation from the speed of light. The model also says that tau-neutrinos (which are partners of muon-neutrinos in the oscillation process) arrive in the OPERA detector 120 ns LATER than muon-neutrinos. Finally, this model is consistent with the weak energy dependence observed so far. If all these predictions are confirmed in future experiments, then the model should be taken seriously.
The Sagnac effect applies to light in GPS clock synchronization because light traveling on Earth is considered to travel at c in the ECI frame. There is really no basis to believe that this holds true for a neutrino, or that such an adjustment is necessary in speed determination on the surface of the Earth. Moreover, the authors have given no basis for the claim in the paper that the Earth’s revolution around the Sun and the movement of the solar system in the Milky Way induce a negligible effect, particularly as the Earth’s rotational speed at the relevant latitude is about 330 m/s while its orbital speed is 30 km/s.
Your suggestion that this adjustment is negligible in typical experiments is actually incorrect. The isotropy limits of modern Michelson-Morley-type experiments are well below the level where such corrections would manifest themselves as changes in light speed, yet no adjustment is made.
My own view is that the speed should be calculated on the basis of fixed distance (baseline) divided by the time of travel across this distance with no Sagnac adjustment.
Matt, is it possible to repeat OPERA-2 with less energetic neutrinos, say from 10 MeV (similar to the SN1987A data) to the 10 GeV range? I bet that \epsilon = (v_\nu - c)/c \propto E^2 in this range… If the superluminal effect diminishes with the energy in the OPERA experiment, and accords with the supernova bound, it seems to me that no time or distance systematic error could be ascribed to OPERA.
There should be no need (especially if the effect goes like E^2) since the OPERA beam contains neutrinos that vary over quite a range in energies. That was precisely the point of the last portion of my article above. We just need to run OPERA-2 for a longer period and plot arrival time versus energy.
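For illustration, here is what such a plot would look like under an E^2 hypothesis, normalized (purely as an assumption for the sketch) so the effect is 60 ns at the beam’s ~20 GeV average energy:

```python
L = 730e3            # m, baseline
C = 299_792_458.0    # m/s, speed of light
E_REF = 20.0         # GeV, roughly the beam's average energy
DT_REF = 60e-9       # s, assumed anomaly at the reference energy

def early_arrival(E_GeV):
    """Early-arrival time if epsilon = (v - c)/c grows like E^2 (toy model)."""
    epsilon = (DT_REF * C / L) * (E_GeV / E_REF) ** 2
    return epsilon * L / C   # convert the fractional speed excess back to a time

for E in (10, 20, 50, 100):  # GeV, spanning the energies present in the beam
    print(f"E = {E:3d} GeV -> early arrival {early_arrival(E)*1e9:7.1f} ns")
```

An energy dependence this steep, across the 10 to 100 GeV range already present in the beam, would be hard to miss once arrival time is plotted against energy; that is the point of simply running OPERA-2 longer.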
Did Opera team say if they checked for correlations of some kind between GPS satellite positions and neutrino times of flight in this new set of 20 events? (e.g. neutrinos seemingly faster when the satellite is closer to LNGS than CERN… things like that)
That might possibly point to an overestimation in the path length, or to some subtlety in SR not properly taken into account….
GPS is not used continuously throughout the experiment. It is used initially to fix the distance from the above-ground station at the accelerator to the above-ground station at the detector. It is also used as one of multiple techniques for finding the fixed delays involved, such as the delay in the 8.3 km optical fiber at Gran Sasso. Once the distance and the various time delays have been measured, they are no longer actively updated during the experiment, so GPS is not being actively used while data acquisition is going on.
Not that I am aware of.
Oops, sorry, the above post is wrong. The cable delays are found using two techniques: (1) portable clocks and (2) two-way propagation times. GPS isn’t involved in either of these. The way GPS comes into the timing is in the synchronization of the clocks at CERN and Gran Sasso. This does indeed seem to be actively maintained by the two PolaRX+Cs clocks at the surfaces, described in fig. 5 of the OPERA paper. However, I think these are not just atomic clocks but atomic clocks regulated by GPS. It doesn’t seem plausible to me that the atomic clocks would drift by tens of nanoseconds, and it seems even less plausible that this would not be noticed when they were regulated against GPS. IIRC this technique is supposed to be good to a fraction of a nanosecond.
Ben, do you consider that the Sagnac correction introduced in the neutrino time measurement in version 2 of the OPERA paper is justified? If so why and why are the effects of the orbital and galactic motions negligible as claimed by the team?
Yes, I think they were correct to include the Sagnac effect for the earth’s rotation but not any correction for the orbital and galactic motions.
GPS uses a system of coordinates called Earth-Centered Inertial (ECI). This is a coordinate system whose three spatial axes do not rotate with the earth, and whose time coordinate can be thought of as the one that would have been determined by Einstein synchronization via radio waves carried out between a clock at the center of the earth and a radio station at the point of interest that is *not* rotating with the earth (assuming you could transmit electromagnetic waves through the earth, which you actually can’t). Because of this, clocks that are synchronized to ECI time through GPS are not synchronized with one another. You can call this a Sagnac effect (because it’s due to the earth’s rotation), but essentially it’s just a Lorentz-transformation term $vx/c^2$ between the frame that’s not rotating and the one that is. (This is pretending that the baseline is east-west, which it isn’t, but that’s the basic concept.) The correction is necessary, but is only 2 ns, which is almost negligible here.
The reason that the orbital and galactic motions should not be corrected for is that by the equivalence principle, any nonrotating, free-falling frame is a valid inertial frame in GR. Since the earth is free-falling, a nonrotating frame fixed to its center is a valid inertial frame, and there are no observable gravitational or inertial effects whatsoever that would allow the detection of its motion by local measurements such as CNGS/OPERA.
(Just to clarify the previous post, the effects of the orbital and galactic motions are not just negligible, they are identically zero, by the equivalence principle.)
“The Sagnac effect applies to light in GPS clock synchronization because light traveling on Earth is considered to travel in the ECI at c. There is really no basis to believe that this holds true for a neutrino or that such an adjustment is necessary in speed determination on the surface of the Earth.” This sounds like a misconception about how relativity works. Relativity predicts exactly the same kinematics for an ultrarelativistic particle as it does for a ray of light. Effects like the Sagnac effect are fundamentally spacetime effects, not effects involving light per se.
Since the conversation has returned to GPS clocks, I thought I might post this comment.
Suppose you have two clocks. You place them next to each other. A pulse from a satellite passing by at velocity v is used to synchronize them. An observer on the satellite and you will agree that the clocks are synchronized. The key is that the clocks are not separated by any distance.
Now the two clocks are moved so they are separated by a distance L on the ground. You could do a measurement on the ground from midway between them by flashing a light to synchronize the clocks again if you like. And suppose the orbit of the satellite passes directly over both clocks. To an observer on the satellite, even when the satellite is midway between the clocks, the two clocks will not appear to be synchronized. The difference will be vL/c^2. This comes directly from the SR Lorentz transformation equations, and these equations have been tested extensively. (Note, this is different from the time dilation effect on the ticking rate of moving clocks, which is much smaller.)
Now suppose you decide to use the pulse from the satellite to synchronize the clocks. For simplicity, you arrange to use a pulse when the satellite is the same distance from each clock. The clocks reset when the pulse arrives and now to an observer on the satellite they are synchronized.
However, if you were to repeat the synchronization measurement on the ground, you would find the clocks are not synchronized. Their synchronization would be out by vL/c^2. For OPERA the term would work out to 30 ns if the satellite orbit lines up with the detector and accelerator.
So the question for OPERA is whether this needs to be taken into account, and if so, has it been? Depending on the orientation of the satellites used, or the configuration of their GPS-disciplined atomic clocks, it may not matter at all. However, I did not notice an explicit mention of vL/c^2 in the student’s thesis on the OPERA timing, but a thesis is not a paper and the discussion was extremely technical. It seems very likely that within a collaboration of 100+ sharp people, this well-known effect would have been addressed. Still, it would be nice to hear confirmation from OPERA that this is in fact the case.
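As a back-of-the-envelope check of that 30 ns figure: plugging in a typical GPS orbital speed of ~3.9 km/s (my number, and the orbit-parallel-to-baseline alignment is the idealization described above):

```python
C = 299_792_458.0  # speed of light, m/s

def sync_offset_ns(v_mps, baseline_m):
    """Relativity-of-simultaneity offset v*L/c^2 between two ground clocks,
    as judged from a frame moving at speed v along the baseline."""
    return v_mps * baseline_m / C**2 * 1e9

# GPS satellites orbit at roughly 3.9 km/s; the CERN-Gran Sasso baseline is ~730 km.
print(sync_offset_ns(3.9e3, 730e3))  # ~30 ns, as stated above
```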
Actually, the spatial coordinates of the ECI do not rotate with the Earth and the GPS clocks (whether stationary in the ECI or moving relative to it) are synchronized with each other! I am not persuaded by your explanation for the Sagnac correction in OPERA nor your invocation of the Equivalence principle to argue for the non-inclusion of the effects of orbital and galactic motions. As for the Sagnac being a spacetime effect, I guess we can agree to disagree.
“Actually, the spatial coordinates of the ECI do not rotate with the Earth and the GPS clocks (whether stationary in the ECI or moving relative to it) are synchronized with each other!” Thanks for the correction. I should have said ECEF, not ECI. In any case, this does not affect the logic of my argument, since the reason for needing the Sagnac effect is that the GPS *time* is stated in a nonrotating frame, and this is true in both ECEF and ECI.
I am surprised that there are so many people trying to “fix” relativity, and so few actually checking OPERA’s time (T) and distance (D) measurements. The drunkard and lamppost phenomenon perhaps? I have seen a few papers discussing special relativity corrections to T (is the jury out on that?) but none addressing D, which seems to be the most delicate one, since it is a complex triangulation that includes two earth-satellite radio legs (is that correct?). Is there any critical, detailed, independent review of this measurement?
First, there is a lot of work going on that people are not publishing. (For instance, if I personally had an idea about a mistake that OPERA might have made, I would not write a paper on it, because it has no physics content; I would simply send them an email and ask them whether they might have made this mistake. Dozens of physicists have done this; none has yet convinced OPERA that they goofed.)
Second, except in the case of a relativity mistake, it would be hard to actually check OPERA’s distance measurement or time measurement oneself; all one could do is point out a possible source of problems, and ask OPERA if they had overlooked it. We can’t ourselves redo the surveying that was needed for the distance, or carry the clock from CERN to Italy, or check the electronics timing delays, to see if we get the same answers. At certain points we simply do have to rely on those who did the experiment. Of course, we still won’t write the result into the textbooks (especially one this radical) unless other experiments confirm it.
“Is there any critical, detailed, independent review of this measurement?” See page 9 of the OPERA paper: “This time link was independently verified in 2011 by the Federal German Metrology Institute PTB (Physikalisch-Technische Bundesanstalt) by taking data at CERN and LNGS with a portable time-transfer device commonly employed for relative time link calibrations.”
“I have seen a few papers discussing special relativity corrections to T (is the jury out on that?)” No, the jury is not out on that. Van Elburg simply didn’t bother to learn anything about GPS before writing his erroneous paper.
“but none addressing D; which seems to be the most delicate one, since it is a complex triangulation that includes two earth-satellite radio legs” The largest source of uncertainty in D is in the measurement done inside the tunnel at Gran Sasso using old-fashioned optical surveying techniques. They did this from both ends of the tunnel, and the two results agreed to within a few cm. See p. 100 of Brunetti’s thesis http://www.bo.infn.it/opera/docs/theses.html to get an idea of how this worked. The error bars on D are 20 cm; to explain the 60 ns anomaly, you would need an error of 20 meters.
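As a quick sanity check on those numbers, here is the trivial arithmetic converting the 60 ns anomaly into the equivalent distance error:

```python
C = 299_792_458.0  # speed of light, m/s

anomaly_s = 60e-9                  # the ~60 ns early-arrival anomaly
distance_error_m = anomaly_s * C   # error in D needed to mimic it
print(distance_error_m)            # ~18 m, i.e. roughly the 20 m quoted,
                                   # about 90 times the 20 cm error bars
```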
Thanks for the reply. I have checked Brunetti’s thesis. Needless to say I lack the knowledge to understand the details of their T and D measurements. However, it seems to me that they have placed substantial faith in the GPS, e.g. trusting that the atmospheric corrections broadcast by the satellites are correct. Is such faith warranted? For example, the GPS radio signal is not only slowed by the atmosphere but is also bent by it; and 20 metres in 11 km is only 0.1 degrees. Are the atmospheric corrections that accurate?
Has there been any other comparable distance on Earth that was measured this way and was checked by some other method, independent of GPS, to 20 meter accuracy?
As for error bars and statistical accuracy: repeatable does not mean correct. I keep thinking of Hubble’s primary mirror: laser-checked umpteen times and believed be perfect with 100 nm accuracy then, but later found to be off by about 1 cm.
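Incidentally, the “0.1 degrees” figure above checks out arithmetically, taking the commenter’s 20 m displacement over an 11 km path at face value:

```python
import math

# Bending angle needed to displace a signal by 20 m over an 11 km path
# (the commenter's numbers, used here purely as a consistency check).
angle_deg = math.degrees(math.atan(20 / 11_000))
print(angle_deg)  # ~0.1 degrees
```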
Jorge Stolfi wrote: “However, it seems to me that they have placed substantial faith in the GPS, e.g. trusting that the atmospheric corrections broadcast by the satellites are correct. Is such faith warranted?” Yes. The random and systematic errors in this type of high-precision GPS measurement are on the order of centimeters. The error needed in order to explain the anomaly is 20 meters. GPS is not the limiting factor in the distance measurement. The limiting factor is the old-fashioned surveying that they did with tripods inside the tunnel.
Jorge, if the signal speed is slowed to c´, does that mean that the estimated distance D´ = c´t is larger than the true distance D = ct? If this is the case, then with the reported superluminal neutrino speed v_nu = c(1+epsilon) > c, the measured neutrino time delay would be

delta_t = D´/v_nu − D/c = (c D´ − v_nu D)/(v_nu c) = (c c´ − v_nu c) t/(c^2 (1+epsilon)) = (c´ − v_nu) t/(c(1+epsilon)),

so that, solving for c´ (and using t = D´/c´):

c´ = delta_t · c(1+epsilon)/t + v_nu = delta_t · c(1+epsilon) c´/D´ + c(1+epsilon)
⇒ c´ (1 − delta_t · c(1+epsilon)/D´) = c(1+epsilon)
⇒ c´ = c(1+epsilon) D´/(D´ − delta_t · c(1+epsilon)).

So, for delta_t = −60 ns, epsilon = 2.37 × 10^−5, and the OPERA value for D´, we have

c´ = 1.0000237 c D´/(D´ + 60 × 10^−9 · c · 1.0000237).

With D´ ≈ 730,000 m and c ≈ 3 × 10^8 m/s, we get

c´ ≈ 0.999999 c
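Plugging the numbers into the final formula above (a direct numerical evaluation, nothing more):

```python
C = 299_792_458.0  # speed of light, m/s

def c_prime(delta_t_s, epsilon, d_prime_m):
    """Evaluate c' = c (1+eps) D' / (D' - delta_t * c (1+eps)),
    the formula derived in the comment above."""
    return C * (1 + epsilon) * d_prime_m / (d_prime_m - delta_t_s * C * (1 + epsilon))

ratio = c_prime(-60e-9, 2.37e-5, 730e3) / C
print(ratio)  # ~0.999999, as claimed
```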
What is your value for c´ in the atmosphere, plus bending? I suppose that it would be greater than this necessary value to produce the OPERA measurement. Or not?
Here is a rough sketch of my understanding of the GPS measurements: http://www.ic.unicamp.br/~stolfi/EXPORT/projects/neutrinos/GPS-atmospheric.png except of course that there are N (4 at least) satellites radio-visible from both sites rather than 2. To obtain T and D, a series of GPS measurements is made. Each measurement is the reception at CERN and LNGS (actually at the Gran Sasso tunnel entrance) of the same data packet sent by one satellite. The raw data obtained from each measurement are: the time tS when the packet was sent, the time tC it was received at CERN, and the time tL it was received at LNGS, each in the respective local clock; plus the orbital position and velocity of the satellite at tS. By grinding enough of these measurements one obtains the distance D and the time difference tCL between the CERN and LNGS clocks. The latter makes it possible to compute the neutrino flight time T. Is this correct?
“By grinding enough of these measurements one obtains the distance D and the time difference tCL between the CERN and LNGS clocks.” You describe it as if every time a neutrino flight time T is measured, a D is also measured. That’s incorrect. Although they did monitor changes in D over time, those changes (due to tectonic drift) were too small to affect the neutrino results, and they didn’t remeasure D for every neutrino event. The measurement of D is not a pure GPS measurement, and in fact the GPS part of the measurement is a negligible source of error compared to the optical surveying done with transits inside the tunnel at Gran Sasso. I also don’t think they’re using GPS to compute every T. GPS is used to regulate the two atomic clocks at the surface at the two sites, so that they stay synchronized.
To Ben: my understanding too is that the “grinding” is off-line and the results are D and tCL, not D and T.
To Osame: The index of refraction of air at 1 atm is about 1.0003 (perhaps 1.0004 in extreme climates). It means that the extra delay when GPS signals cross the atmosphere should be between 10 nsec and 30 nsec, depending on the inclination angle of the satellite relative to the local zenith. So, a hypothetical systematic error in the atmospheric correction would probably be less than that (unless they got the signs wrong 😎
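Here is a rough sketch of where that 10–30 ns range comes from. The ~8 km sea-level-equivalent path (an atmospheric scale height) and the off-zenith factor are my own assumptions, not anything from the thread:

```python
C = 299_792_458.0  # speed of light, m/s

def atmospheric_delay_ns(n_minus_1=3e-4, effective_path_km=8.0, zenith_factor=1.0):
    """Extra propagation delay from the atmosphere's refractive index.

    effective_path_km: equivalent path at sea-level density (~8 km scale
    height is an assumption); zenith_factor grows as the satellite moves
    away from the local zenith.
    """
    return n_minus_1 * effective_path_km * 1e3 * zenith_factor / C * 1e9

print(atmospheric_delay_ns())                 # ~8 ns near zenith
print(atmospheric_delay_ns(zenith_factor=3))  # ~24 ns at low elevation
```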
Anyway, since atmospheric delays affect the two arrival times tC and tL in the same direction and roughly by the same amount (because the satellite’s inclination is roughly the same at both sites), my intuition tells me that such delays would have little effect on the computed CERN-LNGS clock shift tCL, and hence on T.
As for the computed distance D, I have no intuition; its relation to tC and tL seems to be rather complicated, I cannot guess what would be the effect of systematic errors of a few nsec in them.
By the way, it seems that radio echoes are a known source of errors for stationary GPS position measurements. If the LNGS receiver was actually detecting the echo from a nearby mountain face, instead of the direct signal, the computed D could easily be a few *kilometers* off. Is it possible that echoes from the surrounding terrain are confusing the receiver just enough to offset D by 20 meters?
Perhaps we should wait until replicas of CERN and OPERA are placed in solar orbit somewhere beyond Pluto. That may bring the geodesy computations within grasp of mere mortals. 😎
“Anyway, since atmospheric delays affect the two arrival times tC and tL in the same direction and roughly by the same amount (because the satellite’s inclination is roughly the same at both sites), my intuition tells me that such delays would have little effect on the computed CERN-LNGS clock shift tCL, and hence on T.” This is standard GPS stuff, well understood, not anything unique or special about this experiment. You’re wasting your time by speculating about explanations that require experts on time transfer to be complete idiots who have never thought about basic, obvious issues. The time transfer was also independently checked by a method that did not depend on GPS clocks; see p. 9 of the OPERA paper and T. Feldmann et al., “Advanced GPS-based time link calibration with PTB’s new GPS calibration setup,” 42nd Annual Precise Time and Time Interval (PTTI) Meeting, operaweb.lngs.infn.it/Opera/publicnotes/note134.pdf
“By the way, it seems that radio echos are a known source of errors for stationary GPS position mesurements. If the LNGS receiver was actually detecting the echo from a nearby mountain face, instead of the direct signal, the computed D could easily be a a few *kilometers* off.” Again, this is standard GPS stuff. The difficulties you’re talking about occur with a certain type of GPS unit used in a certain way. The experts on geodesy are not idiots.
Your scoldings are deserved of course…
(but the folks who manufactured Hubble’s mirror too were world-class optics experts… 😎 ).
Thanks for the replies, and all the best…
Many people think that the synchronization of clocks is the problem in measuring the speed of neutrinos in the OPERA experiment. A paper currently submitted for publication shows that this time of 60 ns is consistent with the relativistic Shapiro effect. The title of this article is: “Additional delay in the Common View GPS Time Transfert, and the consequence for the Measurement of the neutrino velocity with the OPERA detector in the CNGS beam”
you can read in :
I really liked your idea for graphing the increases in power vs. what should be an increase in speed.
You seem to be saying the power of neutrinos from the 1987 supernova was 0.02 GeV. But how do you know that was a consistent power from the supernova across space? Was there no loss? I thought an exploding star could generate more energy than the CERN neutrino beam?
It’s really important to be very careful with language here.
Each neutrino from the supernova carries an energy… not a power. A particle has energy, momentum and mass. Some of those neutrinos have an energy of 0.01 GeV, some 0.04 GeV — 0.02 is a rough average.
The supernova puts out an incredible number of these neutrinos, and the total energy emitted by the supernova is vastly more than the sun will put out in its entire lifetime. Since that energy is emitted in 10 seconds or so, the power (energy per unit time) of the supernova is simply staggering. BUT: that’s not relevant.
The only thing that matters, as far as the neutrino measurements are concerned, is what each neutrino, separately, is doing. The neutrinos, as they travel through space, have no contact with one another; they’re just independent actors. And indeed, each neutrino loses no energy as it crosses space — energy is conserved — unless it hits something (which happens to very few), but in that case it won’t make it to earth anyway.
The beam from CERN is a beam of neutrinos which individually have much higher energy than the neutrinos from the supernova. However, the number of neutrinos in the beam is minuscule compared to the number of neutrinos released by the supernova, so the total energy of the supernova in neutrinos is much higher than the total energy of the CERN beam. But the effect I want to measure has nothing to do with the total energy of the beam — it has only to do with the energy of each neutrino individually. So from the point of view that matters, the CERN beam is a moderate-total-energy beam of high-energy neutrinos, as opposed to a high-total-energy beam of moderate-energy neutrinos.
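To put rough numbers on this comparison (these are my own textbook-level figures for SN 1987A and the CNGS beam, not numbers from the post):

```python
GEV_TO_J = 1.602e-10  # 1 GeV in joules

# Rough, commonly quoted figures (assumptions for illustration):
sn_total_j = 3e46               # total energy SN 1987A emitted in neutrinos
sn_energy_per_nu_gev = 0.02     # typical supernova neutrino energy
cngs_energy_per_nu_gev = 17.0   # typical CNGS beam neutrino energy

n_sn_neutrinos = sn_total_j / (sn_energy_per_nu_gev * GEV_TO_J)
print(n_sn_neutrinos)  # ~1e58 neutrinos from the supernova

# Each CNGS neutrino individually carries far more energy:
print(cngs_energy_per_nu_gev / sn_energy_per_nu_gev)  # ~850x more per neutrino
```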
Hello there. Your blogs about neutrinos and the OPERA experiment helped me a lot with my end-of-high-school assignment! You have a really smooth style of writing.
Thanks for these blogs, keep up the good work 🙂
Glad to help.
One cannot get a faster-than-light neutrino from the OPERA-2 experiment; you can from OPERA-1. The reasoning is straightforward.