Of Particular Significance

Some Comments on the Faster Than Light Neutrinos

POSTED BY Matt Strassler

ON 09/23/2011

The OPERA experiment has now presented its results, suggesting that a high-energy neutrino beam has traveled 730 kilometers at a speed just a bit faster than the speed of light.  It is clear the experiment was done very carefully.  Many cross-checks were performed.  No questions were asked for which the speaker did not have at least a reasonable answer.

Some preliminary comments on the experiment (none of which is entirely well-informed, so caution…)

  • They have to measure times and distances to an accuracy of 1 part in a few hundred thousand. This is hard but not impossible, and they have worked with metrology experts to carry these measurements out.
  • The timing measurement is not direct; it has to be made in a statistical fashion. The proton beam pulses that make the neutrino beam pulses [read more about making neutrino beams here] are not sharp spikes in time, but are distributed in time over ten thousand nanoseconds. (Recall the measured early arrival of the neutrinos is only 60 nanoseconds.) And so one cannot measure, for each arriving neutrino, how long it took to travel. Instead one has to measure the properties of the proton beam pulses carefully, infer the properties of the neutrino pulses, measure the timing of the many arriving neutrinos, and work backwards to figure out how much time on average it took for the neutrinos to arrive. This sounds tricky. [Thanks to Ryan Rohm for calling my attention to this a few days ago; however, see his comment below.] That said, the experimenters do show some evidence that their technique works.  But this could be a weak point.
  • I am a bit concerned about the way in which statistical and systematic errors are combined. The theory for statistical errors is well-defined; one assumes random fluctuations. In combining two statistical errors E1 and E2, one says that the overall error is the square root of E1-squared + E2-squared.  This is called “adding errors in quadrature.”  But systematic errors are much less well-defined, and it is not clear you should combine them in quadrature, or combine them with statistical errors in quadrature. The OPERA experiment combines all errors in quadrature, and says they have a measurement at 6 standard deviations away from the speed of light. If you instead combined systematic errors linearly with statistical errors (E1+E2 instead of as above) you would get 4 standard deviations. If you combined all the systematic errors with each other linearly, and then with the statistical error linearly, you would get 2 standard deviations (though that is surely too conservative). All this is to say that the result is not yet so significant that every reasonable, more conservative treatment of the uncertainties would still give a completely convincing answer. This is just something to keep in mind when evaluating such an exceptional claim; we need exceptional confidence. (A rough numerical illustration of how the choice of combination changes the significance appears just below.)
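
To make the error-combination point concrete, here is a minimal numerical sketch. The 60 ns central value is the one under discussion; the individual statistical and systematic errors below are placeholders chosen purely for illustration (not OPERA's actual error budget), picked so that the three combination rules land near the 6, 4, and 2 standard deviations quoted above.

```python
import math

# Placeholder numbers, chosen only to illustrate how the combination rule matters.
delta_t = 60.0   # ns, measured early arrival
stat = 6.9       # ns, assumed statistical error (illustrative)
syst = [5.0, 3.0, 2.5, 2.0, 2.0, 1.8, 1.6, 1.5, 1.2, 1.0, 0.7]  # ns, assumed systematic pieces

# (a) Everything combined in quadrature (the experiment's choice).
quad_all = math.sqrt(stat**2 + sum(s**2 for s in syst))

# (b) Systematics in quadrature with each other, then added linearly to the statistical error.
syst_quad_then_linear = stat + math.sqrt(sum(s**2 for s in syst))

# (c) Everything combined linearly (surely too conservative).
linear_all = stat + sum(syst)

for label, err in [("all in quadrature", quad_all),
                   ("syst in quadrature, then linear", syst_quad_then_linear),
                   ("all linear", linear_all)]:
    print(f"{label:32s} total error = {err:5.1f} ns  ->  {delta_t / err:.1f} sigma")
```

The point is not the particular numbers but how strongly the quoted significance depends on the convention used to combine the uncertainties.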

Now, some brief comments on the theoretical implications, in addition to what I said in Tuesday’s and Thursday’s posts.

You may have heard some people say that neutrinos traveling faster than light would mean that Einstein’s theory is completely wrong and implies that instantaneous communication and even time travel would be possible. Balderdash! This is loose and illogical thinking. If Einstein’s theory were exactly correct AND neutrinos could travel faster than light, then this would follow. But if neutrinos travel faster than light, then Einstein’s theory is wrong at least in some part, and until you know exactly how it needs to be modified, you can draw no such conclusions.

As I emphasized in yesterday’s post, Einstein’s principles should be divided for current purposes into two parts:

  1. There is a universal speed limit.
  2. Light travels at this speed limit.

Observing that some neutrinos travel faster than light could mean there is no speed limit at all, or simply that light does not travel at the speed limit. And there are much more complex logical possibilities. Some of these would require only rather small (though still revolutionary) adjustments to current theoretical physics. Others would be more disruptive to current thinking. But it is certainly not true (as some physicists have said in public fora) that it requires going back to the drawing board as far as theoretical physics and Einsteinian relativity are concerned. It will depend on which of the various logical possibilities is actually operating. Hyperventilating about the impending collapse of existing theoretical physics is a tad inappropriate at this time.

In fact, over the years quite a few theorists (including very mainstream and well-respected scientists at major universities) have considered the possibility that Einstein’s principles might be slightly violated, and some of that work from the 1990s (and probably earlier [?]) suggested that studying neutrino properties would be a good way to look for signs of such violations. So it is not as though these issues have never come up before in theoretical physics… though I think it fair to say that everyone has viewed this exciting possibility as a long-shot.  (And most of us probably still do, until this experiment is confirmed.)

There is certainly more to say about the theoretical situation, but I am still learning about what the experts already know.

93 Responses

  3. Numbers are the Supreme Court of science. However, Gödel proved that we cannot prove everything using numbers. Physics needs numbers. There must be Physics Foibles.

    1. Numbers are not the supreme court; they are the tools used in the court. The comparison of quantitative prediction with quantitative data is the process.

      In physics, we do not attempt to prove the validity of physics. Physics, as with all science, rests on certain unprovable assumptions. The Gödelian problems arise only when you try to prove the validity of your logical system using your logical system. That isn’t something scientists attempt to do… and Gödel tells you that if you tried, you’d fail.

  4. Hi. What do you think about entangled particles and J. Cramer's experiment? Do you think the OPERA result confirms it? ashkbous, Tehran, Iran

  5. Hi. I know of experiments that used two atomic clocks, one beside the other, both keeping time with equal precision. But when one of them was lifted by 30 cm, its time ran noticeably slower because of gravity (the warping of space-time). Could you tell me whether the OPERA experiment took this into account?

    1. Atomic clocks are astonishingly precise. The real issue in your question is the word “noticeably”. I believe this effect, though noticeable, is far too small to affect the OPERA measurement (2 parts in 100,000.) Moreover, even if I were wrong, the impact would be that the two clocks would gradually drift apart. In this case what OPERA measured would have changed over time. But their three years of data show essentially the same effect in all three years (albeit with low statistical significance.) So I doubt this could be a significant factor.
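
For scale, here is a back-of-the-envelope version of the effect asked about above, using the standard weak-field formula for gravitational time dilation (fractional clock-rate shift of roughly g*h/c^2). The 30 cm height is taken from the question; the 2.5 parts in 100,000 is the size of the reported velocity anomaly.

```python
g = 9.81        # m/s^2, surface gravity
h = 0.30        # m, height difference between the two clocks
c = 2.998e8     # m/s, speed of light

rate_shift = g * h / c**2   # fractional clock-rate difference between the two clocks
opera_anomaly = 2.5e-5      # fractional speed anomaly reported by OPERA, (v - c)/c

print(f"fractional clock-rate shift for a 30 cm lift: {rate_shift:.2e}")
print(f"OPERA anomaly (v - c)/c:                      {opera_anomaly:.1e}")
print(f"ratio of the two:                             {opera_anomaly / rate_shift:.1e}")
```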

  6. Let’s imagine this: there are 3 satellites sending signals to the ground. All of them are affected by gravitational waves in space-time. This means they are not located exactly where their internal records indicate, and they report that data to GPS laboratories on the ground. Likewise, in addition to the laboratories’ instruments being inaccurate, the space-time “crust” has been deformed over the distance between the two labs. So this problem involves the two clocks on the ground and the three clocks in space, even though the satellites’ internal records no longer reflect the reality of their location and time. How, then, can the detectors make any accurate calculation with so much inaccurate data? I believe these errors grow exponentially, not in a simple way. That’s my thinking. Regards.

    1. The fundamental problem with this explanation is that it can screw things up only by smearing out the measurements — sometimes the distance would be too large, sometimes too small — but that’s not what OPERA observes. OPERA observes a systematic shift of the entire collection of neutrinos forward by 60 nanoseconds (if time is the issue) or 20 meters (if space is the issue.) You haven’t explained why you think you could get that effect. Moreover, the distance was measured twice, at different times, and the same answer was obtained to something much smaller than 20 meters.

  7. Correction: I’m talking about the internally recorded positions of these satellites, relative to the ground.

    1. But the gravitational waves don’t just affect the crust; they affect the very space and time over which the GPS signals travel. I can’t see how you’d avoid a very big effect, not only on the GPS measurement as applied to OPERA, but on the whole GPS system. Maybe I still don’t really understand what you have in mind.

  8. Yes, I agree, but we must remember that the GPS satellites are moving at almost 40,000 km/h and they do not register any deformation of the crust. Anyway, thanks.

  9. Hi Matt

    I thought about it, and I was reminded of the gravitational-wave detectors, with their long underground arms, which rely on light beams and on the deformation of space-time to detect some irregularity in the photons’ trajectory. Could the OPERA test have acted as a gravitational-wave detector, where a deformation of space shortened the neutrinos’ route (with no velocities greater than c), and the researchers did not realize it? Is there any possibility of that? Thanks.

    1. Well — can we really imagine a gravitational wave that (a) would move OPERA by 60 feet (20 meters) relative to CERN, (b) would not oscillate, so that we wouldn’t get just as many neutrinos arriving late as arriving early, and (c) would have no effect on the GPS system?

  10. Either the photon is significantly massive (almost ruled out by everything else), or there is no relativity. Which is funny, because Einstein came to the Lorentz transformations and the existence of a universal speed limit (“only at first sight inconsistent with relativity”, 1905) through the relativity principle. Now we want to keep the universal limit, but get rid of the relativity from which it was derived. As for the Higgs mechanism, it is possibly on its deathbed, and so unlikely to be in a state to come to the aid of anything else.

    1. I’m sorry, but I profoundly and completely disagree with all of these statements. The most serious mistake is your statement about the Higgs mechanism. You should not confuse “not finding the Standard Model Higgs particle” with “killing the Higgs mechanism”. First, there are many experiments that support the Higgs mechanism — many high-precision tests from the LEP collider, and most importantly the decay properties of the top quark. Second, not finding the Standard Model Higgs particle will *only* mean that the simplest possible version of the Higgs mechanism is wrong… not that the Higgs mechanism itself is wrong. Third, the Higgs mechanism does NOT predict there will necessarily be a Higgs particle at all! Please read my posts on this point: http://profmattstrassler.com/articles-and-posts/the-higgs-particle/implications-of-higgs-searches-as-of-92011/

  11. If the conclusion of the neutrino measurement is that photons simply move slower than the universal speed limit, doesn’t that imply that photons have a small, non-vanishing mass? And if so, what of the stringent experimental upper limit on the photon mass? And then what are the implications for the gauge invariance of QED — that a Higgs mechanism is needed here too?

  12. Hi, I have this back-of-the-envelope calculation. I haven’t checked for details because of lack of data, time, expertise, etc…

    I suggest the problem is with the calculation of the distance between points A and B, which is about 700 km. Simplify to 1 GPS satellite for the position measurement. The height (h) of such a satellite is about 20,000 km. Imagine that they do the synchronization using a non-inertial reference system, and so they ignore the rotational motion of the Earth. Now they wrongly change reference system to an inertial one (which sees the Earth moving). Let’s look at the data for measuring the spatial location of point A:

    Given the height h above, the time of flight for the radio signal (used to measure position) to travel from the GPS satellite to point A is roughly 0.07 seconds (h / c, where we simplify to vacuum for most of the path). The tangential speed of the Earth due to its rotation around its axis is about 0.3 km/s at that latitude. So point A translates, along an approximately straight line, 0.07 * 0.3 = 0.021 km = 21 meters from its previous position in this inertial system. If the translation is tilted by a given angle theta with respect to the direction connecting A and B, we could have an effect consistent with 18 meters, which is the advantage of the neutrino with respect to a hypothetical competing photon. I don’t know how this gross error could have been made, whether the signs and the theta angle would be appropriate, or whether this hypothesis would be consistent with the different measurements made at different times during the experiment. Extremely unlikely, anyway, but I wanted to share it. Thanks.

    Best regards,

    C. S. S.

    1. The problem with your analysis is that even consumer-grade GPS devices are regularly better than 20 m in horizontal position, so…

  13. reposted – sorry about typos.

    I share your concerns about “adding errors in quadrature” for both statistical and systematic errors. I do believe that putting systematic, non-random errors into equations that assume a normal distribution is plain wrong. Also, in today’s world experiments contain many parts that may rely on calibrated sources that do not necessarily produce random fluctuations, such as synchronized atomic clocks or GPS. If more than one part of the experiment depends on such a source, the errors, while appearing random, may be correlated and therefore cannot be treated as normally distributed. I’d appreciate your further thoughts on this.

    1. I agree that such types of errors might be present and might even be dominant at OPERA. In the meantime I asked one of the best young statistics experts in particle physics his opinion, and he said something similar to you, but added that he didn’t know enough about OPERA’s details to make anything other than a vague statement.

      1. Thank you for your feedback.
        It is interesting that all blogs and all comments I’ve seen on this subject so far assume only two possible outcomes: an error would be found or these results would be confirmed.

        I’ll go out on a limb and speculate that a further review of the data from OPERA will neither find a specific error nor provide the level of proof required. Instead, the current ambiguity of the results will remain.

        The problem may lie not with the physics or the metrology but with handling the combined uncertainty of all the different parts of this complex experiment. The analytical approach works only for linear systems and perhaps a few other special cases. With non-linear systems, it is very hard and most of the time plainly impossible to establish a definite relationship between fluctuations in the components’ behaviour and the overall system performance.

        The fact that the authors of the OPERA paper combined statistical and systematic errors as if they were dealing with a linear system and normal distributions tells me that they did not question whether their statistical models for estimating errors are applicable at all.

        I do hope that future experiments are designed with a better understanding of the inherent limitations of analytical statistics.

  15. I’m friends with Amara Graps (Astronomer @SWRI, used to teach Astronomy at Italy univ), & she brought up a similar issue. That of latitude disparity (which causes Coriolis Force):

    https://plus.google.com/113489431205673849613/posts/avuf2273yJD

    I read it, but most of it is beyond me to be honest. Their baselining of the distance and timing looks good, but it’s hard for me to pick it apart since I’m not familiar with that sort of thing. I’ll leave that to the pros.

    I wondered if perhaps the change in latitude between CERN and OPERA might account for it; they rotate around the center of the Earth at different velocities (this is what gives rise to the Coriolis effect; the Equator spins at 0.463 km/sec, but the poles don’t move at all). That gives rise to a small relativistic effect of time dilation. But this effect, if I’ve done the math right, is far far too small to account for the 60 nanosecond anomaly (the change in rotational velocity from CERN to Italy is about 21 meters/sec). There’s a longitude effect as well but I didn’t do that math. I’m tired. 🙂

    Anyway, I wonder if things like that were considered; it wouldn’t be the first time a really obvious effect was forgotten when something big came up in physics.

    I’ll be curious to hear what the professional particle people have to say tomorrow. I’m not alone in thinking there was probably an error made someplace – and I’m not casting aspersion on the people involved; this is damn tough work to do – but at this point I don’t know what it might be.
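
Here is a rough numerical version of the estimate in the comment above. The latitudes are approximate values assumed for CERN and Gran Sasso, and the time-dilation difference is computed only to lowest order in v^2/c^2; none of this comes from the OPERA analysis itself.

```python
import math

c = 2.998e8                        # m/s
v_eq = 465.1                       # m/s, equatorial rotation speed of the Earth
lat_cern, lat_lngs = 46.2, 42.5    # degrees, approximate latitudes of CERN and Gran Sasso

v_cern = v_eq * math.cos(math.radians(lat_cern))
v_lngs = v_eq * math.cos(math.radians(lat_lngs))
print(f"rotation speeds: CERN {v_cern:.0f} m/s, Gran Sasso {v_lngs:.0f} m/s, "
      f"difference {v_lngs - v_cern:.0f} m/s")

# Lowest-order special-relativistic rate difference between clocks at the two sites.
rate_diff = (v_lngs**2 - v_cern**2) / (2 * c**2)
flight_time = 730e3 / c            # s, roughly the neutrino time of flight

print(f"fractional rate difference: {rate_diff:.1e}")
print(f"accumulated over one flight: {rate_diff * flight_time * 1e9:.1e} ns (vs. the 60 ns anomaly)")
```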

  16. Stupid question: Did they account for the Earth’s rotation and orbit when making their calculations?

    They were aiming at a moving target after all.

  17. If I understand correctly, the proton pulse probability distribution function is the average of millions of waveforms (of the order of 10^13 protons/pulse, 10^19 protons delivered per year). There are 16,000 neutrino events recorded, or of the order of a neutrino event recorded per 100 proton pulses.

    Pulses are timed relative to a trigger.

    The theory is that the neutrino pulse at the detector has the same shape as the proton pulse at the origin, only time-shifted relative to the trigger by the time of flight, and attenuated because of the very poor sampling rate that results from the fact that neutrinos interact very weakly.

    Suppose the proton pulse was a perfect square pulse, i.e, step-up, plateau, step-down. Then what would be relevant to the delay estimate would be the earliest neutrinos (and perhaps the last).

    There are features in the proton pulse, but given that they are small compared to the height of the plateau, they should not contribute a lot to the statistical estimate.

    Suppose 1 in a 100 proton pulses start early relative to the recorded trigger. This would account for the earliest 1% of neutrinos. Averaging over 100 proton pulses makes the 1 in a 100 proton pulse invisible.

    Instead of averaging over all events, I think the analysis should be redone – find the earliest (and last) few percent of the neutrinos. For each such neutrino event, pull out the actual proton pulse waveform from the database of the millions of proton pulse waveforms. Try to statistically locate the rising (falling) edge of the neutrino pulse.

    Another way of seeing this – is suppose the physics turns out to be that 1% of the neutrinos are superluminal and 99% are not. In the plateau of the pulse, the 1% is buried by the background of non-superluminal neutrinos. Discrimination is possible only near the edges of the pulse.
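
The kind of analysis being discussed here (sliding the proton-pulse waveform in time until it best matches the recorded neutrino arrival times) can be illustrated with a toy maximum-likelihood fit. Everything below is simulated: the pulse shape, the event count, and the injected 60 ns shift are placeholders, not OPERA's real waveforms or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy proton-pulse probability density: a ~10,500 ns plateau with soft edges.
edges = np.linspace(0.0, 10500.0, 1051)              # ns
centers = 0.5 * (edges[:-1] + edges[1:])
template = 1.0 / (1 + np.exp(-(centers - 200) / 50)) / (1 + np.exp((centers - 10300) / 50))

true_shift = 60.0     # ns, injected "early arrival" (placeholder)
n_events = 16000      # roughly the number of detected events quoted above

# Draw neutrino arrival times from the template, shifted earlier by true_shift.
probs = template / template.sum()
arrivals = rng.choice(centers, size=n_events, p=probs) - true_shift
arrivals += rng.uniform(-5.0, 5.0, size=n_events)    # crude smearing within a bin

def log_likelihood(shift):
    # Evaluate the (unnormalized) template at each arrival time moved back by `shift`.
    vals = np.interp(arrivals + shift, centers, template, left=1e-12, right=1e-12)
    return np.sum(np.log(vals))

shifts = np.arange(0.0, 120.0, 1.0)
best = shifts[np.argmax([log_likelihood(s) for s in shifts])]
print(f"injected shift: {true_shift:.0f} ns, fitted shift: {best:.0f} ns")
```

As the comment suggests, most of the sensitivity in such a fit comes from the events near the rising and falling edges of the pulse, since the plateau carries little timing information.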

  18. Wonder why they could not just make a run with lower beam energy and watch the effect go away (as the supernova would predict).

    1. Interesting question. I do not know the details of the neutrino beam they are using, but at lower energy one also has a lower collision probability, so one needs an even more intense beam. That might not be possible. Note also this measurement took 3 years. It is not just a matter of flipping a switch, measuring the answer, and declaring victory overnight. That said, measuring the energy dependence of the effect, both in new and older data, will certainly be something physicists will be focusing on.

  19. According to Feynman diagrams, wouldn’t photons be able, for a short while, to decay into an arbitrary number of neutrino and anti-neutrino pairs of the same group or, depending on their energies, into a combination of lower-energy photons and other particles with their anti-particles? Wouldn’t this result in a short time delay for the light each time such a decay and recombination happens (the particles that appear for a short while travel slower than light)? Such a thought would probably also imply that light of lower energies will be closer to the real speed of light than light of higher energies, since at higher energies more short-time decays with higher masses are possible (this also suggests how to conduct an experiment on it, though I do not know exactly what kind of delay such higher-order interactions would produce). Am I making any basic errors in this line of thought? No symmetries would be violated by such behaviour of light, would they? I would be very glad to get a comment.

  20. If the results of the OPERA neutrino experiment are confirmed, are we sure that this will mark the end of relativity theory?
    Not necessarily: as mentioned above, it is possible to postulate that there is a universal speed limit c (the one that appears in the relativistic equations) and that its value is slightly higher than the speed of light. This would preserve Einstein’s magnum opus.

    The next questions become:
    1. Why does the light propagate a bit slower than c?
    2. Why do neutrinos fly faster than photons?
    A conjecture. Assume that the cause lies in the fact that photons are slowed down by an unaccounted-for electromagnetic coupling with the quantum vacuum: the speed of light is c* < c. Being electrically neutral, neutrinos should not be affected by this effect, and this could explain the results of the OPERA experiment (they travel almost at the speed c). Quantum corrections for the electromagnetic interaction are typically calculated by perturbative methods in which the fine-structure constant alpha appears (alpha ~ 1/137). If one assumes that the results of the OPERA experiment exhibited a second-order effect in alpha, then its order of magnitude should be about alpha squared = (1/137)^2 ~ 5e-5. Interestingly enough, this value is close to the relative difference between the neutrino velocity and the speed of light as reported in the paper from the OPERA team: 2.48e-5.
    Is this agreement in order of magnitude just a numerical coincidence or is this pointing to something new?

  21. Thank you for raising an excellent point about mixing statistical and systematic errors while “adding errors in quadrature”. This is plain wrong and I am very much surprised that it is being done by the authors. Yet from looking at numerous blogs and responses you seem to be the only one who pointed it out right away.

    Strictly speaking, even in the case of statistical errors alone, “adding errors in quadrature” is justified only when it is shown that the distribution of each source of error is normal and their correlations are zero.

    It seems that everybody is considering only two options, that these results are either right or wrong, while we may remain in current ambiguous grey area for some time, even with more experiments carried out.

    Thank you again for pointing out that “… when evaluating such an exceptional claim; we need exceptional confidence”.

  22. Dear Matt,
    you might want to have a look at a (short) recent review article on neutrinos and Lorentz violation that summarizes the work of Kostelecky and collaborators, and where explanations of what Lorentz violation is and why neutrinos can be superluminal are presented:

    http://arxiv.org/abs/arXiv:1109.4620

  23. Everyone is speculating about either the theory or the metrology. It would be interesting to consider other straightforward extensions or refinements of the OPERA experiment that would both test the reproducibility of the OPERA result as well as cast additional “light” on the matter.

    One that springs to mind is to do a neutrino version of the Michelson-Morley experiment: for example, detect the signal along two orthogonal directions over the same path length (for convenience). One could also use coincidence detection, implementing a form of interferometry.
    One might also consider correlating the spin statistics to check for entanglement, assuming the technology exists to perform such measurements.

  24. Assume that the Lorentz transformations apply, but with a speed c’ > c, with c the measured speed of light in vacuum. With c’ = 1.00001 * c, the speed of light from a star would vary by 2 parts in a billion as the earth moved towards or away from the star, if I haven’t goofed up on my arithmetic. I think this would be detectable, I would imagine there is already some bound on this.
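
A quick numerical check of the “two parts in a billion” estimate, using relativistic velocity addition with the invariant speed set to c' rather than c. The factor 1.00001 and the use of the Earth’s orbital motion are taken from the comment; the rest is just a sketch of that arithmetic.

```python
c = 2.998e8             # m/s, measured speed of light
c_prime = 1.00001 * c   # hypothetical invariant speed, as in the comment
v_earth = 3.0e4         # m/s, Earth's orbital speed

def add_velocities(u, v, invariant):
    """Relativistic velocity addition with invariant speed `invariant`."""
    return (u + v) / (1 + u * v / invariant**2)

c_toward = add_velocities(c, +v_earth, c_prime)   # observer moving toward the star
c_away = add_velocities(c, -v_earth, c_prime)     # observer moving away from the star

print(f"(c_toward - c)/c = {(c_toward - c) / c:+.1e}")
print(f"(c_away  - c)/c = {(c_away - c) / c:+.1e}")
```

The output is roughly +2e-9 and -2e-9, i.e. a variation of about two parts in a billion, as the commenter estimates.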

  25. @Giotis. I think that’s a very good explanation (and much better than the way SR is often taught), except for one thing, where you say “Maxwell has tricked you”. The only trick open to Maxwell is if light does not propagate in the vacuum and/or light is not a wave. It follows immediately from relativity that any wave propagating in the vacuum [i.e. in the fabric of space-time] must have a Lorentz-invariant velocity – else there is a preferred reference frame.

  26. Yes, Ok. I must confess I was just trying to bounce ideas, I was not thinking of the layperson at all. If this is for laypeople they should be aware that useful ideas are often not formed in conservative conversations where everyone is very careful and correct, but in provocative (I mean theoretically not personally) discussions.

  27. My understanding is that the pillar of SR is not some special property of light but the fact that all inertial frames are equivalent for physics. In the Lorentz Transformations which follow from this fundamental principle you find that there is a speed like parameter (let’s call it K) which must be constant for all inertial observers; the problem is to find the value of that parameter and fix the LT. You know that it can’t be infinite because then you go back to Newton and Galilean transformations. So since it can’t be infinite you set off to find the value of K. By definition and by your fundamental principle (not by causality arguments) this value must be an upper bound for speed and theoretically it could be as close as you like to infinite but not infinite.

    Now, in order for your theory to have meaning, this bound must be reached, at least in principle, by a physical phenomenon in Nature; such a phenomenon (a phenomenon whose speed in vacuum is constant for all inertial frames) will then fix the K in your LT uniquely, will set a speed limit in Nature, and will set the boundaries of causality. All inertial observers conducting experiments should in principle be able to measure that speed and agree on its value.

    So you have set your prerequisites for the phenomenon and you wait for suggestions. Here comes Maxwell (and experiment) and says I know such a phenomenon, it is the propagation of light in vacuum. Einstein replies ok then, if this is true then my upper bound is reached in Nature by light propagating in vacuum. So I will use the speed of light to fix the K in my LT and I will call it c from now on instead of K since I know that such phenomenon uniquely fixes my LT.

    Thus if there is a speed greater than c it could mean two things: either that Maxwell (and experiment) has tricked you, and thus your K remains finite but still undetermined (in that case you hope for another physical phenomenon that could fix your K), or that your fundamental principle is wrong and there is a privileged reference frame. In that sense my understanding is that you can’t exceed that bound (the K) if the fundamental principle of SR is right. K (whatever its value might be) fixes the LT, sets the boundaries of causality and automatically constitutes an upper bound for speed. Causality is a prediction bound to SR; you can’t have SR without causality.

  28. If the results are correct, it doesn’t follow at all that there have to be any causality violations or a possibility to send information into the past. There is, last but not least, the straightforward theoretical possibility of a preferred frame.

    For various completely independent arguments in favour of a preferred frame see my homepage. I think the cell lattice model, which allows one to explain the particle content of the SM, is at least worth some consideration: it conceptually requires a preferred frame.

    Independent of my own theories, the concept of GR as an effective field theory is quite well accepted. But it also implies conceptually that GR is only an approximation of some unknown underlying theory.

    So, relativity will, of course, survive. Possibly only as an approximation. But this is the usual fate of physical theories, and it doesn’t mean “Einstein was wrong”.

    What will not survive are only particular metaphysical interpretations, which depend on Lorentz symmetry being exact even on the most fundamental level.

  29. what are the energies of the neutrinos from the 1987 supernova and the OPERA experiment? /If/ neutrinos were tachyons, then they would go faster the lower their energy…

  30. I’m not a physicist, but I’m wondering how the 730 kilometers is measured so precisely. Since the neutrinos are traveling through physical rock, couldn’t even the most minuscule geological event cause this distance to change? I assume the 730 kilometer distance is re-calibrated and found to be exact before each test.
    How much would the 730 kilometer distance have to change to produce the neutrino speed that’s been calculated in the OPERA experiments?

    Please don’t throw this out because I’m less than a rank amateur. I really would like to know.

  31. I respect both Coleman and Glashow very much (the latter now appears to have been a bit hard done by in having to share his Nobel prize), but I was not aware of this paper. Having just read it, there are things that I have difficulty accepting. For example the statement “[assume] the speed of light c differs from the maximum attainable speed of a material body”. Clearly this implies they assume the photon is massless. But then you can take the limit m->0 for all material particles. They are therefore suggesting either that physics is discontinuous in this limit or that light is somehow special. It is very hard to see the reason for such an unappealing idea. It destroys the relativity principle independent of scale. And SR is about the structure of space-time, not about light, so I don’t think it’s an attack at the right point.

    1. I am sympathetic to your aesthetic concerns, to a degree. But in suggesting Coleman and Glashow I was merely pointing out that a statement you had made in your previous comment (that one would necessarily have to consider massive photons) was not automatic. More generally, I am suggesting we slow down and think hard… especially before making statements in a public forum. The poor layperson has a tough enough time without being further confused by physicists’ misstatements. So let’s all try to be a bit more cautious.

  32. On the statistics side, slides 74, 79, 80 of the webcast presented by Dario Autiero (http://cdsweb.cern.ch/record/1384486) look terribly unconvincing for a 6 sigma result — I would have expected a much more visually noticeable difference. The fits on the top graphs in the slides aren’t that bad; they look more like a 1 sigma deviation from the expectation, not 6 sigma?

    1. Hey! 6-sigma is a good result and 1-sigma is a bad result.
      6-sigma does not mean that the measured data deviate by 6 sigma from the theory (the fit); it’s the other way around.
      6-sigma means that the measured result, that neutrinos are faster than photons, is true with 99.99966% probability.
      More precisely: they measured 60 ns ± 10 ns. Mean = 60 ns, sigma = 10 ns, which means that with 68% probability the true neutrino-photon time discrepancy lies within the interval [50, 70] ns. (Additionally, with 16% probability the true value is smaller than 50 ns, which would include the anti-hypothesis that neutrinos have the same velocity as photons.)
      This six-sigma-signal business means that the measured value (the mean) is 6 sigmas away from the anti-hypothesis.
      Though I don’t really understand why it is a 6-sigma signal; I would call it (if anything) a 5-sigma signal.
      For the anti-hypothesis to be true, they should have measured 0 ns ± 10 ns, and that interval is only 5 sigma away from the measured mean.
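
For what it’s worth, under the usual frequentist convention the quoted “number of sigmas” is just the central value divided by the total uncertainty, and it corresponds to a tail probability of the no-effect hypothesis, not to “the probability that the result is true”. A minimal sketch using the 60 ns and 10 ns figures quoted above:

```python
import math

mean = 60.0    # ns, measured early arrival (as quoted above)
sigma = 10.0   # ns, total uncertainty (as quoted above)

z = mean / sigma                                  # distance from the "no effect" value of 0 ns
p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))   # one-sided Gaussian tail probability

print(f"z = {z:.1f} sigma, one-sided p-value = {p_one_sided:.1e}")
```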

  33. I had a spirited exchange with L. Motl on Vixra blog. He & Daniel are calculating 393 m & 343 m, respectively. This would account for a WAY bigger disparity in delta-T (which their result is NOT, Lubos & I agree that OPERA didn’t make this mistake). I don’t know where T. Larrson is coming up with 22m as difference in “linear chord” & “curved geodesy” (see below URL).

    I discovered the following link, which seems to imply they didn’t make this mistake:

    http://www.cnrs.fr/fr/pdf/neutrino.pdf

    [ this figure doesn’t appear in the Arxiv paper, but it should, to avoid reader confusion ]

    I contacted 2 of the co-authors (Dario Autiero & Henri Pessard), & got an immediate response. It seems as if they are aware that GPS returns 3D coordinates. The paper also has a “figure 7”, which shows 3 graphs (Up [ elevation ], E [ longitude ], N [ latitude ]). Note that the Arxiv paper does an additive calculation for the actual D (distance):

    The baseline considered for the measurement of the neutrino velocity is then the sum of the (730534.61 ± 0.20) m [730.534 km, stated in the .pdf above] between the CNGS target focal point and the origin of the OPERA detector reference frame, and the (743.391 ± 0.002) m [743 m] between the BCT and the focal point, i.e. (731278.0 ± 0.2) m [731.28 km, the magic-number distance].
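
As a rough cross-check of the arc-versus-chord numbers being debated here, one can treat the Earth as a perfect sphere of radius 6371 km and the quoted baseline as a surface arc (both deliberate oversimplifications; the real computation uses geodetic coordinates on an Earth ellipsoid):

```python
import math

R = 6371.0      # km, mean Earth radius (spherical approximation)
arc = 730.5     # km, treating the quoted baseline as a surface arc for illustration

chord = 2 * R * math.sin(arc / (2 * R))   # straight-line distance through the Earth
diff_m = (arc - chord) * 1000.0

print(f"chord = {chord:.3f} km, arc - chord = {diff_m:.0f} m")
print(f"light-travel time over that difference: {diff_m / 2.998e8 * 1e9:.0f} ns")
```

With these assumptions the difference comes out at roughly 400 m, in the hundreds-of-metres range discussed above rather than tens of metres.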

  34. All of my investigations seem to point to the conclusion that they are small particles, each carrying so small a charge that we are justified in calling them neutrons. They move with great velocity, exceeding that of light – Nikola Tesla 1932

    Experimental tests of Bell inequality have shown that microscopic causality must be violated, so there must be faster than light travel. [Abridged by host]

    1. I’m sorry, but this statement is false. The Bell inequality [a subtle feature of quantum mechanics] does not imply that any objects travel faster than light. It implies only that correlations between objects are not stored locally.

  35. Just one analogy.

    I (170 lbs) and a groundhog (2 lbs) each go through exactly ten feet of ground by digging our own tunnels. My ten feet of distance can be 100 times longer than the groundhog’s.

    Thus, the 730 kilometer distance measured precisely with metrology could be a bit longer than the neutrino’s own measurement, since the neutrino sees no mass around it, giving it an advantage even better than the groundhog’s.

    1. I actually agree with this. The metrologists are just as much under scrutiny here as the physicists; the measurement certainly requires a metrology peer review and if for some reason I were asked to review this experiment I would recommend this. However it needs a physics peer review also, because some aspects of the measurement clearly involve the neutrino beam, how it was created and monitored, etc.

  36. A follow-up on my previous post:
    Since there are multiple front-end cards and thus multiple of these local 100 MHz clocks, the sum of their errors can cancel out quite a bit of the error. For the minus 60 ns discrepancy that is reported, that works out to the average 100 MHz clock source having an error of -0.2 ppm.

    So I’ve looked around the internet and the best source of information I could find on this clock source is from this paper: http://arxiv.org/abs/0906.1494

    It describes the 100 MHz clock source as a local clock on a board called the “Ethernet Controller Mezzanine” (ECM), and the picture in the paper shows the mezzanine board to have a simple crystal oscillator (!). Oscillators of that type are commonly used in Ethernet communications and are usually specified around +/-50 ppm, but in practice usually have much better performance. Summing the local clock errors for multiple of these clocks could easily result in the -0.2 ppm that would be needed to account for the result of -60.7 ns.

    There is an easy test to see if this is an error term, if the raw data exist (which they may not). With the raw time-stamp data (fine counter and coarse counter data), bin the 16,111 detected events into two bins: those detected events that have a time stamp with a fine counter of less than 300 milliseconds, and those that are greater than 300 milliseconds. The average detected event arrival in each bin (assuming a flat PDF of arrivals) would be 150 milliseconds and 450 milliseconds respectively, and each bin would have approximately 8000 events. The 0-300 millisecond bin would have a new “early arrival time discrepancy” of 30 ns and the 300-600 millisecond bin would be 90 ns. Of course, with fewer samples in each bin, the statistical error would grow, but it would still be enough to show whether this is the culprit.
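
The binning test proposed here is straightforward to simulate. The sketch below assumes event times distributed uniformly over the 0.6 s window between master-clock resets and a hypothetical -0.2 ppm drift of the local 100 MHz clock; it simply checks that the two bins would then show roughly 30 ns and 90 ns apparent shifts.

```python
import numpy as np

rng = np.random.default_rng(1)

n_events = 16111     # number of detected events quoted above
window = 0.6         # s, interval between master-clock resets
drift = -0.2e-6      # hypothetical fractional error of the local 100 MHz clock

# Elapsed time since the last master-clock reset for each event, assumed uniform.
elapsed = rng.uniform(0.0, window, size=n_events)

# Apparent timing error accumulated by the drifting fine counter, in ns.
apparent_shift_ns = drift * elapsed * 1e9

early_bin = apparent_shift_ns[elapsed < 0.3]
late_bin = apparent_shift_ns[elapsed >= 0.3]

print(f"mean apparent shift, 0-300 ms bin:   {early_bin.mean():.1f} ns ({early_bin.size} events)")
print(f"mean apparent shift, 300-600 ms bin: {late_bin.mean():.1f} ns ({late_bin.size} events)")
print(f"overall mean apparent shift:         {apparent_shift_ns.mean():.1f} ns")
```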

  37. Hi Matt!

    Could we be looking at an instance of “backwards causation” here?
    If so, there’s no need to modify the theory of relativity!

    1. There’s nothing in the experiment that would point us to do something as radical as drop our understanding of causality. All we know is that neutrinos arrived before they were expected.

  38. I am not a physicist, but I did read the paper and I am curious about one particular item that I did not see documented and would like someone to explain: the accuracy of the 100 MHz clock source that is the time base for the “fine counter” in the OPERA detector FPGA on the front-end card of the Target Tracker (TT). The fine counter is slaved to the highly accurate OPERA master clock, which increments a “coarse counter” every 0.6 seconds. The fine counter is reset every 0.6 s by the arrival of the master clock signal that also increments the coarse counter.

    The paper accounts for the delays in resetting the fine counter and the quantization error of the FPGA’s 100 MHz clock (10 ns period) appropriately. However, I don’t see the 100 MHz clock source accuracy described. Even if the clock source for the fine counter is a temperature- and voltage-compensated oscillator of +/- 1 ppm, that would mean an average error of up to +/-300 ns could exist for the given clock source (an average arrival time of 0.3 seconds after the master clock pulse, which is up to 30 of the 100 MHz clock periods off at 1 ppm).

  39. I think separating 1) and 2) is hardly possible. That would mean photons are massive, which also means, since we are bombarded by them from every part of the universe and they are really easy to detect (unlike neutrinos), that we should have observed photons from somewhere travelling much slower than the speed of light (the speed of your car 😉 ). If the speed of light (in Euclidean space) is not Lorentz invariant, it will have to be under conditions far removed from our everyday ones, such as at very high energies (small distances) or perhaps at scales the size of the universe (where it would actually correspond to a breakdown of GR). I think the statistical nature of these results is at fault, and mark my words, they’ll go the way of the fifth force and cold fusion.

    1. Separation of (1) and (2) is, among other things, the basis for interesting papers by Sidney Coleman and Sheldon Glashow, two of the previous generation’s most important scientists. And they do not consider massive photons.

  40. So … has everybody caught where they goofed yet?

    It is an easy one. According to the paper, the distance measurement procedure uses the geodetic distance in the ETRF2000 (ITRF2000) system as given by some standard routine. The European GPS ITRF2000 system is used for geodesy, navigation, et cetera and is conveniently based on the geoid.

    I get the difference between measuring distance along a perfect sphere of Earth radius (roughly the geoid) and measuring the distance of travel (for neutrinos, the chord through the Earth) as 22 m over 730 km. A near-light-speed beam would appear to arrive ~60 ns early, give or take.

    Of course, they have had a whole team on this for 2 years, so it is unlikely they goofed. But it is at least possible. I read the paper, and I don’t see the explicit conversion between the geodesic distance and the travel distance anywhere.

    Unfortunately the technical details of the system and the routine used to give distance from position is too much to check this quickly. But the difference is a curious coincidence with the discrepancy against well established relativity.

    1. http://operaweb.lngs.infn.it/Opera/publicnotes/note132.pdf
      – this shows some more details on the distance calculation. I myself suspected the use of a loxodrome somewhere in their calculations, but that was not the case, as I saw there: they convert all coordinates to Cartesian ones and compute their differences to get the distance.
      Also, I hope geodetic GPS units compute corrections for the Earth’s atmosphere, and that GPS satellites compute corrections for their signal propagation through space, where the solar wind may slow it. If not, we have an incorrect size of the Earth and of all distances on it.

      It seems no one has mentioned the Sagnac effect: while these neutrinos are traveling, the Earth rotates and the Gran Sasso position changes. Since Gran Sasso is south-east of CERN, and the Earth rotates eastward, the neutrinos must travel a slightly longer way than the CERN-Gran Sasso distance (note that global time uses a reference system without Earth rotation); however, its contribution is below 1 ppm, less than the statistical error. General-relativistic frame dragging is smaller still…

  41. My earlier comments on the effects of spill size and the decay time in production proceeded from the notion that the claimed effect was a few nanoseconds. I would find it hard to justify an uncertainty in those processes large enough to cover a 60-nanosecond discrepancy.

    I have my own wild speculation, based on gravitational anomalies: not the gauge-theoretical kind, but geological. \vec{g} varies in direction as well as size in the vicinity of large gravitating objects, and although this was undoubtedly taken into account in calculating the distance, I would simply say that if this result stands, the surveyor should get some share of the credit.

    Ryan Rohm

  42. Mark, nice comments about the experiment but I think I am going to have to disagree with you on what you say about the theory side. The amount by which the neutrinos have been measured to surpass the speed of light is much larger than anything you would get from acceptable corrections to the theory of relativity. It has to be some kind of FTL travel within the broad principles of relativity. Either particles behaving like tachyons or travelling through wormholes. This has to imply the possibility of time travel in some form if the result is right. In fact the neutrinos would have been travelling back in time in the reference frame of the protons that created them. From the point of view of someone travelling with the original proton they were created in OPERA and returned to CERN to meet the decaying muons. If you know a theory that avoids that possibility and is consistent with relativistic dynamics as tested at collider energies or even some argument that such a theory could be possible, I am interested to hear it. I think the theoretical violations of Lorentz symmetry that you refer to are much smaller effects and have already been ruled out in some forms by Fermi gamma ray observations.

    1. Thanks for your comment. Your words sound fancy, but as you know I draw conclusions based on equations. As of yet I don’t know any reason to agree with you, but my mind is open. Could you provide a reference to an article which shows there are no loopholes in your argument?

    2. Matt (sorry I called you Mark before, oops) I think the onus to produce a reference is on the other side. You say that dramatic claims are not justified, but I would say that one of four possibilities has to hold.
      (1) They made an error. This could mean some embarrassing mistake at one point of the experiment or an unlucky combination of errors over several steps all leading in the same direction
      (2) FTL travel and communication with the past is possible. This would be the conclusion if we accept that relativity and the Lorentz transformations are correct at the velocities seen in particle accelerators.
      (3) Einstein was wrong. If relativity fails at such velocities then it really is wrong and we have to go back to the drawing board.
      (4) We face a major paradigm shift. The only other way out is if there is a wrong assumption in the way we think about these things, and it is of a nature that we can’t really see it until experiment forces us to rethink our basics. This has happened before, but not often.

      If you think there is a less radical solution that does not justify at least one of these headlines then I think a reference for that is needed. I know Ellis wrote a nice paper about the scales of violation of Lorentz invariance that led to this measurement but it was a paper that deliberately avoids questions about the details of what kind of theory would be required to provide a result at these scales.

      By the way, I don’t mean to be confrontational. Your work on this blog has been top class and I have learnt a lot from reading it. I just think this point is an interesting one to argue about 🙂

      1. Very good. Well, we both made dramatic statements; but I would say they are of slightly different character. You say there is no way out without allowing apparent non-causal behavior, which was a dramatic statement which requires proof that there is no way out; I say only that we don’t know that, which is easier to prove, especially if I simplify it to “I don’t know that”. I know of no evidence yet that Einstein’s theory, which combined with quantum field theory so beautifully describes thousands of experiments, cannot be modified in such a way as to accommodate this result. But I am open to evidence.

        I did spend part of yesterday discussing the results with several young scientists from Harvard and MIT, and we noted several technical problems reconciling this result with other experimental results and with existing theory. I’ll summarize those soon.

        But it’s quite a jump to saying there’s no universal speed limit, that backward-time-communication is possible, or that the long-cherished causality that is built into that combination of Einstein and quantum field theory must be entirely dropped. Certainly this experiment gives no direct evidence of that; it just measures the speed of one set of particles, at one set of energies. And then there’s the supernova observation, which is pretty clean, in my view. So let’s deal in firm equations and not in wild speculations, and not confuse the public by tossing crazy ideas around.

        Papers one should be reading include summary reviews by Alan Kostelecky and his collaborators. There is also some interesting work by Coleman and Glashow from the 90s, though I think in fact it cannot explain the current observation.

  43. Given the nature of the finding (let us, for the moment, assume it persists), what could be a better testing ground for the phenomenon than the neutrino system?

    1. I think the neutrino system is actually ideal; but one would want a wide variety of experiments. There are different neutrino beams that could be used, with different mixtures of neutrino types; they could be run at different energies, with measurements made on different distance scales, and using beams traveling through either air or earth. This would help nail down the nature of the effect.

      The problem with using anything else to study this is that we already have strong constraints on the motions of charged particles. If electrons or muons traveled faster than the speed of light, they would rapidly lose their energy through what is known as Cerenkov radiation (which I need to write about in a future article.) Signs of this effect would show up in cosmic rays, depleting the numbers of cosmic rays observed at very high energy… but no such depletion is observed.

      So neutrinos and neutrons might well be the only long-lived particles that could travel faster than light and yet be consistent with data. I have a feeling that making these measurements with neutrons might be tough.

  44. Hello Dr. Matt,
    If the results of this experiment are confirmed, can we redefine causality in terms of the neutrino (if it turns out, for example, that the neutrino travels at the universal speed limit)?

  45. I like your separation of points 1 and 2 above.
    1. There is a universal speed limit.
    2. Light travels at this speed limit.

    The more I think about it, the more it seems counter-intuitive that photons should travel at the speed limit.

    Given that a photon traveling through the universe is partially composed of virtual electron-positron pairs, which presumably don’t travel at the speed of light, why is it we think that photons should travel at the speed limit? It should be (the speed of light – some small epsilon from virtual corrections), right?

    Whereas neutrinos only interact weakly, so they would not have an electron-positron contribution or other component of virtual electromagnetic particles, and therefore not gain mass from this virtual electromagnetic component. Neutrinos may travel (the speed of light – some small epsilon correction for their non-zero mass).

    Depending on the balancing of these two small epsilons, maybe the photon is faster, maybe the neutrino is faster.

    1. Interactions don’t give the photon a mass. This is abundantly clear from the classic 1-loop computations of QED.

    2. I see where you’re trying to go with this thinking, but I’m not optimistic. See if you can turn it into equations first, and we’ll talk. In particular, your intuition that quantum fluctuations of the vacuum — which you referred to via the alternative name of “electron-positron pairs” — would slow photons down is not correct; the equations do not agree with you. I can show you details in the “Technical Zone” section of this website, if you like.
