Ok, thanks to a commenter (Titus) on this morning’s post, I learned of information available in the German-language press that is vastly superior to anything I had seen previously — much better than today’s New York Times article, because it contains detailed and extensive quotations from a participating scientist on OPERA. [There is nothing like (nearly) first-hand information!] Here is the link:
You can try Google Translate and it isn’t awful, but it does contain some important mistakes; my German is good enough to read some of it but not good enough to do a proper translation for you. I encourage someone fluent to help us out with a proper translation. Someone has done so — see the first comment. [Thanks to the translator!] [2/26 UPDATE: New comment on the situation added at end of post.]
Facts that one can glean from this article bring me to the following conclusions:
As I suspected all along (and contrary to the original press reports), the OPERA experimenters do not claim to have found the cause of the 60 nanosecond shift in the timing of the neutrinos. They only have suspect causes. This is not the same thing. The mystery is not over yet (though it has probably entered its final chapter.) [UPDATE: In fact, there’s something very weird here. See comment at the end of this post.]
The situation is complex, because they know of two problems, one (with an oscillator) decreasing the shift and one (with a fiber) increasing it. We now know (thanks to the German article) that both of these effects are tens of nanoseconds in size, and one (the one involving the famous faulty fiber) could be as large as 100 nanoseconds. That means that in the language of yesterday afternoon’s post, whose Figure is reproduced here, the situation with OPERA is perhaps describable as case (d), perhaps as case (e). [The difference is whether you take your knowledge of what may have gone wrong as a large systematic uncertainty — giving (d) — or you say that you don’t know enough to do that properly — which gives you (e). They seem to be somewhere in between right now; they know enough to give something like (d), but are focused on reducing the uncertainties down toward (c) or better before they give a quantitative statement.] In short, it is now confirmed that there currently is no known discrepancy — not even a weak one — with Einstein’s theory of relativity.
Despite all of the jokes and snide remarks, it sounds as though the problem with the fiber was quite difficult to find, and was a really nasty one: it depended very sensitively on exactly how the fiber was oriented [thanks to improved translation; earlier reports said “screwed in”], which means a defective fiber or a defective connection, I guess. We’re not just talking about an ordinary loose wire. This is not inconsistent with what my experimental colleagues tell me about looking for sources of electronics problems; in particular, note the comment (highlighted in red) from this morning’s post. [Also note some of the comments coming in; we’re still learning about whether this failure mode was a subtle one or would have occurred early on to someone with the right expertise.]
The experimenters believe that over the coming weeks they can, with some considerable effort, nail down to some degree how big these two effects were. If they succeed, perhaps they will be able to bring case (d)/(e), which is what we are currently dealing with, back closer to case (a) or (b), or at least case (c). We’ll see what they say then. But no one, not even OPERA, will be very confident in the result at that point.
OPERA cannot be absolutely sure that they have found the cause of the timing shift in the neutrinos — and that there are no other major problems — until they rerun the experiment, to see whether fixing the two problems changes last year’s result by just about 60 nanoseconds. Apparently this is not something they can tell post facto just by looking at the way the fiber and the oscillator behave now; it is still not entirely clear why, but surely the devil here is in the details, and we won’t get those from press articles.
Apparently there are people within OPERA who argued back in September that the result was not ready for public presentation. They are vindicated now.
Several experiments (OPERA, ICARUS, BOREXINO, and apparently another one I didn’t know about, LVD) will all attempt the neutrino speed measurement, independently but simultaneously, this spring. (The neutrino beam is very wide when it arrives at the Gran Sasso laboratory, and all of the experiments sit inside it.) Hopefully OPERA will have eliminated its problems, and the other three experiments will not make any mistakes, and all four experiments will get the same answer. Of course these are very hard measurements, so they might not get the same answer. We’ll see. If they all find no shift, then we’re all happy with Einstein and the story is over. If only OPERA finds a shift, and it is still 60 nanoseconds (or comparable), then we’ll know the OPERA folks still haven’t found the source of their problem. If all the experiments find the same non-zero shift, we’ll start talking about Einstein and relativity again… but don’t hold your breath.
UPDATE: There’s something that’s been bothering me all along, and although I alluded to it in an earlier post, a commenter’s question prompted me to write it down more carefully. This issue must also be bothering the OPERA people a lot.
The point is this: there is no reason that the two problems that OPERA has identified — that an oscillator in the main clock was off from what it had been measured to be at some earlier point — and that the exact orientation of the optical fiber bringing in the GPS timing signal could change the amount of light entering the optical-to-electrical conversion system and somehow [they know how, but I don’t] induce a delay in the electrical timing signal exiting the system — should be constant in time, at least not over years. They measured the oscillator at some point and it was fine; now it isn’t. When did it shift? And can we really expect that the orientation of a cable would remain fixed over the three years during which the original OPERA experiment was carried out? A little bit of maintenance, or even just a little settling of other wires nearby, would easily lead to a change in the cable’s orientation.
The mysterious thing is that the two versions of the OPERA speed measurement — the one with long pulses that was done over 2009, 2010, and 2011, and the one with short pulses done over two weeks in October 2011 (which I called OPERA-1 and OPERA-2 to distinguish them) — gave the same answer, about 60 nanoseconds early arrival. But if the problem came from a slowly drifting or suddenly shifting clock combined with a fiber that slowly drooped or got jostled at some point, you’d expect OPERA-1 to have an average of different timing shifts as the oscillator and fiber changed from 2009 to 2011, and OPERA-2 to have a single timing shift that represented what the oscillator and fiber were doing in October 2011. There’s no reason the average timing shift over three years should be the same as the final timing shift, unless the timing shift was extremely stable — but the two problems they’ve identified suggest instability would have been expected.
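To make the worry concrete, here is a toy calculation — my own illustration with invented numbers, not anything from OPERA’s actual analysis. If the hardware-induced delay had drifted over the three-year run, the long-run average that OPERA-1 measured would generically not match the October 2011 snapshot that OPERA-2 measured:

```python
# Toy model (my own illustration with invented numbers, not OPERA's
# analysis): suppose the hardware-induced timing shift drifted slowly
# as the fiber connection settled over the three-year run.

def drifting_delay_ns(t):
    """Hypothetical delay in ns as a function of fractional run time t,
    with t = 0 at the start of 2009 and t = 1 in October 2011."""
    return 20.0 + 70.0 * t  # drifts linearly from 20 ns up to 90 ns

# OPERA-1 effectively averaged the shift over the whole run:
samples = [drifting_delay_ns(i / 1000) for i in range(1001)]
opera1_shift = sum(samples) / len(samples)   # 55 ns

# OPERA-2 sampled only the delay as it stood in October 2011:
opera2_shift = drifting_delay_ns(1.0)        # 90 ns

print(opera1_shift, opera2_shift)
```

Any appreciable drift makes the two numbers differ; the fact that OPERA-1 and OPERA-2 agreed at about 60 nanoseconds is exactly what makes a drifting-hardware explanation puzzling.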
This certainly has me [and some commenters] wondering (and I am sure it has the OPERA people wondering) whether they’ve actually found the main problem.
59 thoughts on “Finally An OPERA Plot that Makes Some Sense”
Hopefully a little bit better than the Google Translate bot.
Measurement errors could explain FTL
Q. Ms. Hagner, two possible error sources were named in the short notice sent out yesterday, which potentially shed new light on the measurements from last year. One source is a tilted plug, the other a correction to the OPERA master clock.
A. Contrary to current reports that the GPS wasn’t working, both potential error sources are located in the electronics before the actual OPERA detector. We have a GPS receiver in the lab on the surface. From there an 8 km long fiber-optic cable runs into the Gran Sasso tunnel to the lab with the OPERA detector. The glass-fiber cable is connected to a little box which converts the optical signals into electronic ones. Then comes the OPERA master clock, down in the tunnel right before the detector.
Q. Was the optical cable screwed in crooked/tilted/slanted?
A. With some detective work and some luck we noticed that it makes a difference how the cable was attached. Depending on the position of the plug the converted signal could be delayed. We detected that if the cable is tilted just a little bit from the ideal position the little box only receives parts of the signal. Depending on the input signal’s amplitude the delay can be up to 100ns.
Q. So the crooked plug could cause the neutrinos to move slower than actually measured.
A. Yes. We do not know how crooked the plug actually was at the time of our measurements last year. Consequently we do not know the actual time delay. Currently we can only judge the magnitude of the effect. We are working on calculating the plug’s effect based on other data from the detector. That will take some time. The plug could indeed be the cause for the FTL neutrinos.
Q. But there is also the main clock behind the small box that needs to be corrected.
A. That is the main clock containing an oscillator. We saw that the actual value of the main clock differs from a value measured earlier. We need to correct that. But that correction would speed up the neutrinos even more. At the moment it appears as if both effects together could explain the measured difference of 60ns. Then we would measure the exact time of arrival as if the neutrinos were to move at light speed.
Q. How big are the projected measurement errors?
A. The analysis is ongoing. We expect to quantify the measurement error within a couple weeks. In any case we need to remeasure with a new neutrino beam. That’s obvious.
Q. And this should happen in May?
A. Yes. Then we can also use the other neutrino detectors – Borexino, LVD, Icarus – also located in the Gran Sasso tunnel for the fight time measurements. That would be an opportunity to compare all results directly. We will have up to 4 different time measurement systems. This is important for the credibility. We need to measure with maximum precision and redundancy.
Q. What if the measurements yield the same results?
A. We cannot be satisfied even if the effect of FTL neutrinos were to vanish. We need to search for further possible error sources. We need to check the GPS and distance between source and receiver. We haven’t completed our check list yet. Hence we do not know if there are more potential error sources.
Q. How do you handle the glee?
A. We’ll see. Of course we are asked why we published so early. Some OPERA members warned against that.
Q. It is unusual to admit measurement errors.
A. We pondered for a long time if we should issue this announcement. The majority of the collaborators decided in favor. It is important to us to find our own potential errors before other experiments point them out. This is about credibility and being able to critique yourself. Above all this is about the credibility of other OPERA measurements – for instance the neutrino-oscillation measurements with which we want to prove that artificially created neutrinos change their identity.
Q. Ms. Hagner, I appreciate the interview.
Interview by Manfred Lindinger.
Thank you, Michael!!!
These blog posts concerning the OPERA result are wonderful, Prof. Strassler.
Thank you very much !
Is the OPERA team justified in their analysis that totally ignores Milgrom’s MOND? Milgrom’s MOND is, according to McGaugh and Kroupa, empirically correct whenever it makes testable predictions.
On pages 83 and 84 of Einstein’s “The Meaning of Relativity”, there are 3 fundamental conditions for the components of Einstein’s tensor of the gravitational potential. The first condition is the tensor must contain no differential coefficients of the Fundamental Tensor components of greater than second degree. The second condition is that the tensor must be linear in these Fundamental Tensor components of second degree or less. The third condition is that the divergence of the tensor must vanish identically. The first two conditions are necessary to derive Newton’s theory of the gravitational potential in the non-relativistic limit. The third condition is necessary to eliminate energy gains or losses from alternate universes. But does dark matter consist of gravitational energy that seems to derive from alternate universes? Consider the following:
Two Button Hypothesis of General Relativity Theory: In terms of quantum gravitational theory, Einstein’s general relativity theory (GRT) is like a machine with two buttons: the “dark energy” button and the “dark matter” button. The dark energy button is off when the cosmological constant is zero and on when the cosmological constant is nonzero. The dark matter button is off when -1/2 indicates the mass-energy divergence is zero and on when -1/2 + sqrt((60±10)/4) * 10**-5 indicates the mass-energy divergence is nonzero.
Prof. Fernández-Rañada had the idea that the Pioneer anomaly can be explained by anomalous gravitational acceleration of clocks.
If the preceding “Two Button Hypothesis” is correct, then the Pioneer anomaly data and a scaling argument for Milgrom’s Law give the Rañada-Milgrom effect, which seems to explain the OPERA neutrino anomaly.
Besides the extensive Hagner interview, there is this short BBC radio interview with OPERA spokesman Ereditato. As far as I can see, this is generally in agreement with the much more detailed Hagner statement.
Ereditato: “There are two effects: One of the two effects would give an increase of the speed. The other one should give a decrease. And this decrease is probably larger than the increase. So this would tend to modify in a more serious or less serious way. In addition to the absolute modification there would be the uncertainty of this measurement. So I think that the best way – we cannot give now a quantitative estimate – the only thing we can say is that: We are concerned about this possibility, and we will need a neutrino measurement again.”
Q: “I have seen that one of the errors could be as large as 60ns.”
Ereditato: “Not the error. The amount of the effect can be 60ns. So this would completely wash out the effect that we measure. But we don’t know if it is 60, 40, 30, 20. We at the moment, we have not prove clear evidence of how much is this effect. We are concerned about the potential effect of this.”
Some sources can also be found in the (more or less reliable) Wikipedia article
I think it is clear that we have more or less the right picture now; everything fits, and all the statements we’re reading are consistent with each other. Thanks for your invaluable help.
I heard that the MINOS experiment was going to be revived, but I haven’t heard any more on that front recently.
Interestingly, MINOS seems to have had trouble with the fibre optics as well. Their final error was mainly systematic. Perhaps more thought needs to be given to the communication channels in high precision experiments?
LOTS of thought is given to the communication channels; timing is everything in high-precision electronics.
I don’t know what is up with MINOS. Perhaps they too are mired in some systematic uncertainties that they don’t fully understand.
It is now very certain that the OPERA data is contaminated with systematic errors. Is there a small chance that the LHC data for the light Higgs also has some systematic errors? Has anyone ever looked into this possibility? After all, there is enough difference between the two experiments at the LHC. Furthermore, the CDF Collaboration has very much excluded a light Higgs in their 10 fb-1 of data. The following is quoted from CDF’s recent publication.
From: CDF Collaboration
“Search for a Standard Model Higgs Boson Decaying Into Photons
at CDF Using 10.0 fb−1 of Data
The CDF Collaboration
(Dated: January 1, 2012)
A search for the SM Higgs boson in the diphoton decay channel is reported using data corresponding to an integrated luminosity of 10.0 fb−1. We improve upon the previous CDF result by increasing the amount of data included by 43%. No excess is observed in the data over the background prediction and 95% C.L. upper limits are set on the production cross section times the H → γγ branching fraction for hypothetical Higgs boson masses between 100 and 150 GeV/c2.”
The entire article is available at http://www-cdf.fnal.gov/physics/new/hdg/Results_files/results/hgamgam_dec11/cdf10737_HiggsGamGam10fb.pdf
What is the value for this CDF data now in terms of the light Higgs search?
There can be systematic errors in any measurement, and there is always discussion of that by the scientists involved and those outside. In fact, if you look at all of my posts about the current Higgs search, they all touch on the fact that ATLAS and CMS see peaks in slightly different places, and that these are difficult measurements (1% accuracy is required for a precise mass determination.)
However, beyond that crude level, one cannot compare the OPERA measurement (a 1 in 100,000 precision measurement of a distance and of a time, with no control sample) with the Higgs search (for which the required precision is not so extreme and for which there are many control samples.) The systematic errors in the Higgs search are small enough that, with more data, there is little danger of a false signal. And you can compare two experiments, ATLAS and CMS, for which it is very unlikely indeed that they would both have the same false signal. Moreover, at the moment, the danger of a false signal comes more likely from statistical uncertainties, not systematic ones.
As for CDF, it is a Tevatron experiment and is not really competitive with the LHC experiments for a Higgs particle between about 115 and 130 GeV.
I believe that the best solution is to completely replace all the equipment with new units and redo the tests. But they would need to exchange it for other models and brands.
Is the dispersion over an 8 km fiber ~100 ns? Does somebody have a clear explanation of the physics?
– “a 8km long fiber optic cable is run into the Gran-Sasso-Tunnel to the lab with the OPERA detector”
– “if the cable is tilted just a little bit from the ideal position the little box only receives parts of the signal. Depending on the input signal’s amplitude the delay can be up to 100ns”
I am sure that’s not relevant, though I don’t have a clear explanation yet from an expert. The box that the fiber in question enters converts the optical signal in the fiber to an electrical signal to be used later. The amount of light that enters the box must somehow be involved in how and when the conversion of the signal takes place — in other words, a reduction in the light entering the box must somehow lead to the corresponding electrical signal being delayed. [Clearly, this is not obvious if, like me, you’re not an expert in these devices.] Presumably the OPERA experimentalists had overlooked this possible failure mode for the timing measurement; otherwise they would have checked for it long ago.
That’s right – dispersion in a fiber results in photons arriving at different times – and a bell-type shape of the signal. It should be characterized by some dispersion time. Conversion of light into an electrical signal has a threshold; a bad connector should result in less light and hence an electrical signal being delayed. On the other hand a bad connector could result in reflection and “echo” signals. Just wanted to confirm that dispersion is the explanation for an error.
If so, dispersion of 100 ns should have been the main suspect in a time-of-flight experiment where the measured effect is of the same order, ~60 ns.
I’m a little confused. You wrote that you “Just wanted to confirm that dispersion is the explanation for an error”, but I thought that dispersion is not the explanation for the error, and that a bad connector is responsible.
Can you clarify for non-experts like myself why you get a delay, rather than a misfire or no electrical signal at all, if the input light is diminished?
Sorry, should have been clearer – dispersion of light combined with a bad connector could result in an additional delay of ~100 ns, and could be the explanation.
I am not claiming that’s true, and I am not claiming to be an expert; I just wanted to get some physics explanation of the 100 ns effect.
If the input light is diminished and the electronics that converts the light into an electrical signal (ADC) has a threshold, then the timing of the electrical signal relative to the front of the light signal will be delayed more for a diminished signal than for the full signal. The time difference (due to the connector) will be of the same order as the dispersion of the light arrival.
We’ll get this straight eventually, I am sure… A little patience and a few more details, that’s all we need.
From the PhD thesis of one of the OPERA members:
on page 59:
“This fiber arrives in a technical room of the underground laboratory where it is connected to a patch panel, which splits the signal to several fibers going to the various experiments. The signal is transmitted every ms and includes the coding of the date. The leading front of the signal corresponds to the start of the ms. After a short period it starts the transmission of the bits containing the coding of the date and the hour till the ms. This coding implies the transmission of 80 bits.”
My guess is that the optical signal is detected with a photo-diode and a comparator and that the optical signal has an amplitude which is very much greater than required for detection. The rise time of the optical signal could be fairly long, say 100 ns, but this wouldn’t matter if the cable were properly aligned with the photo-diode since it would detect the signal very near the beginning of the optical signal. If the cable and photo-diode were improperly aligned then the optical signal seen by the photo-diode would be reduced and the comparator would trigger later. Dispersion in the optical fiber would also cause this, i.e., it would increase the rise time of the optical signal, however, the optical fiber used is single mode which has very low dispersion and could be used to transmit 10’s of GBits/sec over the length of this 8 km fiber, i.e., dispersion << 1 ns. Since the data rate required is only 80 bits per ms, the data could be encoded with pulses with slow edges.
Eric — Thanks! Might I ask — for the benefit of my many non-technical readers (and to some degree for me too), could you try to translate this into even less technical language? Some of this would be hard to follow if you didn’t know quite a bit about electronics.
Matt, hope this helps:
A digital optical to electrical converter is designed to output one of two states (voltages): a zero or a one, corresponding to the incoming light being less than or greater than a given threshold. OPERA uses the time of an optical signal pulse edge (transition from a zero to a one) to indicate the time when a GPS signal was received with high accuracy. To transmit information reliably, the amount of light that is sent to encode the two states is either much less or much greater than the threshold. The converter can be divided into two sections, a first section that converts the amount of incoming light to a corresponding voltage (a photo-diode plus amplifier), and a second section that outputs one of two discrete voltages (interpreted as zero or one) depending on whether the incoming voltage is above or below a threshold voltage (a comparator…basically a fast difference amplifier with high gain…the two output voltages are just the amplifier saturating). All of the components used in constructing the converter have finite response times. Imagine that each photon that impinges on the photo-diode results in a tiny voltage pulse of short duration. The output of the first section (photo-diode plus amplifier) is a temporal average (with some time constant) of these individual photon pulses. Both the photo-diode and the amplifier contribute to the averaging process. Let’s assume that the converter is designed with a threshold of 1000 photons/ns. If the incoming optical signal starts at time 0 with 0 photons/ns and then increases linearly to 100000 photons/ns over a duration of 100 ns (the slope is 1000 photons/ns/ns), the converter will detect that the signal has passed the threshold after 1 ns. If the optical signal is attenuated by a factor of 10, then the slope of the incoming optical signal is 100 photons/ns/ns and the converter will detect that the signal has passed the threshold after 10 ns. 
The averaging process also has the effect of reducing the slope of the incoming signal, which means that an incoming signal of reduced amplitude will pass the trigger threshold later. The averaging time constant could be fairly large, say 100 ns, but it is inconsequential when the incoming signal is not attenuated, since the signal is designed to be much greater than the threshold value.
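A minimal numerical sketch of this threshold effect, using the invented numbers from the comment above (a threshold of 1000 photons/ns and a 100 ns linear rise; these are illustrative, not OPERA’s actual specifications):

```python
# Toy model of a threshold-triggered optical-to-electrical converter.
# Numbers are the illustrative ones from the comment above, not real
# OPERA hardware specifications.

def trigger_delay_ns(peak_rate, rise_time_ns, threshold=1000.0):
    """Time at which a linear ramp from 0 to peak_rate (photons/ns)
    over rise_time_ns crosses the threshold; None if it never does."""
    if peak_rate < threshold:
        return None  # signal too weak: no trigger at all
    slope = peak_rate / rise_time_ns   # photons/ns per ns
    return threshold / slope           # ns after the ramp starts

# Well-seated connector: full signal, 100000 photons/ns peak over 100 ns.
full = trigger_delay_ns(100000, 100)       # 1.0 ns
# Tilted connector: only 1/10 of the light gets into the box.
attenuated = trigger_delay_ns(10000, 100)  # 10.0 ns

print(f"full signal triggers after {full} ns")
print(f"attenuated signal triggers after {attenuated} ns")
```

The same fixed threshold, applied to a weaker copy of the same ramp, simply fires later — which is the proposed mechanism by which a tilted plug turns into a timing delay.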
Answer 6 – I suspect that’s “flight time” unless she’s being subtly political across multiple languages 🙂
How did they get such a low statistical error in the first place (<10 ns)? Does that not imply that the systematic error is fairly constant (assuming that the neutrinos travel at constant speed, no matter what the actual speed is)?
You have to be very careful to separate statistical uncertainty (which is due to random fluctuations) from systematic uncertainty (which concerns shifts due to your known unknowns). The total uncertainty of 10 nanoseconds is the combination. But the real killer for experiments is the unknown unknowns. You cannot estimate the uncertainty on your measurement due to those things that you do not know that you do not know about.
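For the record, independent uncertainties are combined in quadrature; with OPERA’s published values of about ±6.9 ns (statistical) and ±7.4 ns (systematic), that gives the roughly 10 ns total:

```python
import math

# Independent statistical and systematic uncertainties add in quadrature.
stat_ns = 6.9   # OPERA's published statistical uncertainty
syst_ns = 7.4   # OPERA's published (known-source) systematic uncertainty

total_ns = math.sqrt(stat_ns**2 + syst_ns**2)
print(f"combined uncertainty: {total_ns:.1f} ns")
```

The unknown unknowns appear nowhere in this sum, by definition; that is the point.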
It would seem they simply didn’t account for the possible failure mode of a fiber connection generating a time delay. It must have escaped their attention. And so, not knowing about it, they (a) couldn’t check it and fix it before they ran the experiment, and (b) couldn’t put any associated estimate of the size of the potential problem into their estimate of uncertainties. (The second problem, with the oscillator, is even a bit more surprising, since they surely knew they needed to check it… but that’s the problem that, had it been corrected, would have made the neutrinos’ arrival seem even earlier.)
I don’t know what you mean by “systematic error is fairly constant”. Constant with respect to what? And what does that have to do with the neutrino speed being constant? Please re-ask your question more clearly, so I can answer it.
I meant to ask whether the effect of the “unknown unknowns” has been fairly constant over the measurements they have made of the neutrino flight time, meaning that the flight-time measurement has shifted, for example, between 55 and 60 ns (or by some other fairly constant amount within a few ns) over all the measurements, and not, for example, between -20 and 120 ns.
If the latter had been the case, should they not have measured larger variations in the flight times, so that even if they did not know the source of the spread of the measurements, it would have been visible in the results they obtained, and they could not have ended up with a total uncertainty of the order of 10 ns?
Is this correct reasoning?
It is, and this is something that I’ve been puzzling over. In fact it is such an important point that I think I’ll add something into the text of the article.
One question I have is this: if I remember correctly, pretty much all the measurements over the past few years, and then the recent rerun of the experiment, seem to have given the same result of around 60 ns early arrival time for the neutrinos. How likely is it that both of these errors would have combined to give a remarkably consistent result over many years and many experiments?

If you read the interview with Ms. Hagner, she states that, “We saw that the actual value of the main clock differs from a value measured earlier.” This implies that the value has changed from a previous measurement, so if this error were affecting the time measurement of the neutrinos, there should have been some difference in the time measurement during the course of the experiments.

With regard to the second potential error, the faulty wiring, she states, “We do not know how crooked the plug actually was at the time of our measurements last year. Sub-sequentially we do not know the actual time delay.” Notice how she says that she does not know how crooked the plug was LAST YEAR. This implies one of two things, I think: one, the connection is liable to move from its ideal position for some reason; otherwise, if the connection were absolutely secure, she would know that whatever the current state of the connection is was also its likely state during last year’s experiment and the experiments of the previous years. Or two, the connection has been adjusted or reinserted since last year, in which case they would not know whether the connection was the same as when the experiment was run. In either case it seems unlikely that the results would have been so consistent throughout the years. I’m not saying I’m convinced that neutrinos travel faster than light, but I’m not convinced that these two errors caused incorrect measurements to be recorded.
The results are too consistent throughout the course of the experiment for these errors to have been the cause.
🙂 While you were writing this comment I was writing the identical set of concerns into the post as an update. You’ll see I fully agree; it’s been bothering me too. I am sure it is bothering the OPERA people also, and is part of why they keep saying they are far from sure they’ve found the problem.
From the quotation from Giulia Brunetti’s thesis quoted in Eric Shumard’s comment:
“This fiber arrives in a technical room of the underground laboratory where it is connected to a patch panel, which splits the signal to several fibers going to the various experiments.”
Does this mean that OPERA, ICARUS, BOREXINO, and LVD will all be depending on this one potentially faulty optical fiber / GPS system for the timing when they all simultaneously attempt the neutrino speed measurement this spring?
I’m wondering how large or movable quartz-crystal oscillators of the required precision are, and whether it would be practical to transport an oscillator from CERN to Gran Sasso by helicopter or lorry, synchronize it at Gran Sasso, take it back to CERN and re-synchronize it there, and so on back and forth, preferably hundreds of times, with this exercise going on continuously throughout the duration of the experiment. Then even if there are problems with the stability of the quartz oscillator while it is being moved around in this way, the multiple repetitions should assist in obtaining a sensible average.
Another point that puzzles me about why this hasn’t been done is that progress in high density integrated circuits tends to make electronics ever more stable and less likely to be perturbed by the relatively small accelerations likely to be encountered in a helicopter or lorry journey. Is it really infeasible nowadays to transport a small crystal oscillator repeatedly over distances of hundreds of kilometers without its timing becoming too perturbed for nanosecond measurements?
If repeated physical transport of crystal oscillators like this is feasible at all, I would have thought it would provide an essential cross-check on the GPS and optical fiber system.
First, the problem is at the end of the branch of the fiber where it enters OPERA’s optical-to-electrical converter. If the converter for each of the experiments were to have a similar problem, it wouldn’t have identical magnitude, so the four experiments would all get different results.
Second, now that this type of problem is a known problem, it will get proper scrutiny and won’t be repeated.
Of course the question as to whether there could be as-yet unidentified errors in the GPS timing that could affect all the experiments together still is a serious one.
My understanding is that to the extent possible the four experiments are doing the timing and distance measurements independently. Of course they all use the same GPS satellite system, so a certain amount of correlation is unavoidable.
Repeating the measurement, as OPERA is planning to do, having fixed the two known problems, will give insight into whether there are still other problems out there which might indeed be correlated among the experiments.
I assume that the cost of moving clocks back and forth repeatedly is something that would be quite high, and would be avoided unless it became absolutely necessary. You certainly wouldn’t do that in a first measurement. You’d wait until the stakes became high enough (e.g., all four experiments see the same shift and you’re looking for last-gasp reasons to show they’re all wrong.)
I think that there is an ideological point around OPERA. Once a particle such as a neutrino is supposed to be faster than light, a door opens; after neutrinos will come other particles or even molecules. It is a shocking thing to imagine a bunch of objects reaching light speed. Of course, I’m not convinced that neutrinos travel faster than light. I think that we will need to measure the neutrinos again.
Note: Other particles that you mention have speeds that are already constrained by other measurements.
If the neutrino were faster than light, we would see a sort of bending of the photons involved in the process. From a linear perspective the light stretches out, but from another point of view it is not stretching but bending. It looks as though the neutrino pulls down a jet of photons, carrying the timing away with it. I don’t mean that the neutrino is faster than the photon, but in this case there seems to be some gravitational action coming from the neutrino that affects the photon. If measurements reveal that the neutrino is not faster than light but exactly as fast, then we might consider what conditions would be needed to subsequently prove that the neutrino matches the photon’s speed. But that point will need some months to be confirmed.
With regard to the cost of a cross-check by physically transporting a nanosecond clock backwards and forwards between the labs: my eMachines eM350 netbook, which cost about 160 GB pounds or 250 US dollars, contains an Intel Atom N450 CPU that runs at 1.66 GHz, so it must contain a clock that ticks roughly once per nanosecond (about 0.6 ns per cycle). By using the clock function in the C++ ctime library one can write a C++ program that runs in a loop, repeatedly reads the number of “clock ticks” elapsed since the program was launched, and writes that number to the screen via cout each time it has increased by more than a set value since the last write.

The netbook has a video connector that can send its video signal to an external video device, so by using a video cable that is split into its component wires just outside the netbook, and attaching the crocodile clip on the end of a short oscilloscope lead to the appropriate wire, the voltage signal from each screen write could be detected on an oscilloscope with no more than about a 4 ns delay for a 1 metre lead, for comparison with a timing signal similarly extracted from the timer in either lab.

The netbook can continue to run while closed, and the battery lasts for about 5 hours, so a student could take a netbook running such a program, calibrated at CERN, by coach or bus to Gran Sasso, with stops on the way to recharge the battery if need be, cross-check at Gran Sasso, then return to CERN, and so on. I think it would be reassuring if the Gran Sasso experiments, and also MINOS, would at least attempt something like this, and tell us how it compared with the GPS timing system.
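The loop program described above can be sketched in a few lines (here in Python rather than C++, using `time.perf_counter_ns`; the interval and report count are illustrative, not anything from the original suggestion):

```python
# Illustrative analogue of the proposed netbook timing loop: repeatedly read
# a high-resolution counter, and record a "report" each time the counter has
# advanced by more than a set interval since the previous report.
import time

def tick_reports(interval_ns=50_000_000, n_reports=3):
    """Collect counter readings spaced at least `interval_ns` apart."""
    reports = []
    last = time.perf_counter_ns()
    while len(reports) < n_reports:
        now = time.perf_counter_ns()
        if now - last > interval_ns:
            reports.append(now)   # in the original scheme, this is the screen write
            last = now
    return reports

reports = tick_reports()
# Consecutive reports are separated by at least the requested interval.
gaps = [b - a for a, b in zip(reports, reports[1:])]
```

As the replies below point out, the hard part is not reading a counter this fast but keeping its rate stable over hours of travel.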
The oscillator in a PC or most any other piece of electronics has an absolute frequency accuracy of tens of parts per million and will drift by tens of parts per million with changes in temperature and voltage. This means that after 1 second, a clock based on this oscillator will be off by microseconds. An atomic clock has an accuracy and stability such that it will be off by something on the order of 1 ns per day (this is why OPERA uses GPS to continuously correct for this drift). A continuous cavalcade of grad students or FedEx ferrying atomic clocks between CERN and Gran Sasso could keep things sufficiently synchronized.
As the others have replied already, the issue is not whether you can count nanoseconds at all, but whether you can count 86,400,000,000,000 nanoseconds per day without drifting by more than a nanosecond or so. Ordinary electronics that you can buy in a store don’t stand a chance.
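The back-of-envelope arithmetic behind “don’t stand a chance” is easy to write out (round numbers only):

```python
# How fast do ordinary clocks drift, versus what the measurement requires?
ns_per_day = 24 * 3600 * 1e9          # ~8.64e13 nanoseconds in a day

# A PC-grade quartz oscillator: roughly 10 parts per million fractional error,
# so it gains or loses ~10,000 ns every second.
pc_drift_ns_per_s = 10e-6 * 1e9

# To accumulate no more than ~1 ns of error over a full day, the clock's
# fractional stability must be about 1 part in 8.64e13, i.e. ~1e-14.
required_stability = 1 / ns_per_day
```

A PC clock misses the requirement by roughly nine orders of magnitude, which is the gap atomic clocks (or GPS discipline) exist to close.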
And even with a nanosecond drift, you’d have to take the clocks one way each day. Say you budget a graduate student to do that: that’s $80,000 right there, minimum, counting salary, healthcare, and housing on the visiting end. And then the back and forth 800 km (driving) over 150 times a year would cost you $10,000 in gas, plus a car that can go 80,000 km in a year without needing to be fixed all the time, which means a pretty new vehicle that you won’t easily be able to sell at the end (another $20,000 or more). Add incidentals and we’re up to $120,000. You’d better be sure there isn’t a cheaper way or a more reliable way before committing to this.
Chris, sorry, but you’re all confused. The clock on your netbook isn’t even remotely close to stable enough for this purpose.
To work as you’ve described, a clock would have to gain or lose less than (say) 10 ns over the 5 hour period. That means that its rate must change by less than 5e-13 (dimensionless) over that time. (Any initial error in measuring the clock’s rate comes out of that budget, too.)
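The rate budget quoted above comes straight from dividing the error budget by the trip time (numbers as stated in the comment):

```python
# The fractional-rate budget for a hand-carried clock: at most ~10 ns of
# accumulated timing error over a 5-hour trip between the labs.
budget_s = 10e-9          # 10 nanoseconds, in seconds
trip_s = 5 * 3600         # 5 hours, in seconds

# Any average fractional rate error larger than this blows the budget.
max_fractional_rate_error = budget_s / trip_s   # roughly 5.6e-13
```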
Now, the average change in a clock’s rate is itself a function of time, and is known as the Allan deviation curve. (The square of the deviation, namely the Allan variance, is also often plotted; be sure to do the right conversion.)
It is well known that, for long time intervals, quartz oscillators have essentially random-walk frequency errors. That is, the error in frequency grows as the square root of the elapsed time, and the integrated error in time grows as the 1.5 power of elapsed time.
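A toy simulation (not a model of any particular oscillator) shows where that 1.5-power scaling comes from: the time error is the integral of a frequency error that is itself a random walk.

```python
# Random-walk frequency noise: each step, the frequency error takes a random
# step, and the time error accumulates the frequency error. The RMS time
# error then grows as (elapsed time)**1.5.
import random
random.seed(42)

def time_error_rms(n_steps, n_trials=2000):
    """RMS accumulated time error after n_steps, averaged over many trials."""
    total = 0.0
    for _ in range(n_trials):
        freq, terr = 0.0, 0.0
        for _ in range(n_steps):
            freq += random.gauss(0, 1)   # frequency error random-walks
            terr += freq                 # time error integrates frequency error
        total += terr * terr
    return (total / n_trials) ** 0.5

# Quadrupling the elapsed time should multiply the RMS time error by
# roughly 4**1.5 = 8 (about 7.8 for these finite step counts).
ratio = time_error_rms(100) / time_error_rms(25)
```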
Here’s an Allan deviation plot for several frequency standards. Note that at tau=1.8e4 seconds (5 hours), a standard high-quality and temperature-compensated but not temperature stabilized oscillator (the TCXO at the top of the plot) has an Allan deviation of more than 2e-9. The oscillator in your laptop is not high quality and due to the complete lack of any temperature compensation will vary by more than 1e-6 between underground and surface temperatures.
Even one of the highest-quality quartz oscillators ever made (labelled “BVA” in the figure, and yes it’s ) can only hold 1e-12 stability for up to about 2000 seconds.
(The “option 08” models, basically the best dozen made in a given year and sold for a high premium, can do somewhat better, but still not good enough.)
Now, the required accuracy can be obtained with cesium oscillators and hydrogen masers. But it’s fiddly.
Here’s an interesting page on a 2005 experiment to measure relativistic time dilation on a weekend mountain trip. Note that over 48 hours, the 3 cesium clocks drifted apart by 12 nanoseconds.
While I agree with you that the fact that OPERA-1 and OPERA-2 got the same results is very suspicious, doesn’t the fact that they did get the same results pretty much prove that the errors OPERA found are not unstable (although the descriptions of the errors would suggest otherwise)?
No, the fact that they got the same result both times doesn’t prove the two problems found give stable timing shifts. It might just indicate that their effects are smaller than currently feared, and that there’s a bigger, stable shift due to a still unidentified problem.
But indeed, this is the mysterious part of the whole business.
I think the crucial point we were never told until now is that the connector is screwed in, not plugged in. Maybe Autiero and others commented in Italian and the reporters’ translations changed things a bit, but I doubt it.
With a screwed-in connector, an unscrewed-a-bit state is likely a stable state in that most shocks to the rack would not cause its “looseness” to change. The screw threads would hold the cable in place against translational forces (only a rotational torque would advance it).
I wonder why OPERA wants to repeat the neutrino speed measurement. It was not the goal of the experiment, and the experiment is not designed to do this. I think just saying that the result was wrong is enough.
But I’ve nowhere read whether and how it affects their main goal, the measurement of mu-neutrino oscillations to tau neutrinos. They used (AFAIK) the time signal to lower the background. If the time is shifted, they may measure the neutrinos at the wrong time, causing wrong results in their primary experiment. So does this error affect the neutrino oscillation experiment?
Once a group has started an experimental measurement, they want to finish it. It is embarrassing enough to get it wrong; it is even worse to throw up your hands and say “we can’t do it”. As I’ve emphasized, they do not yet even have a result, because (contrary to media reports) they cannot yet be sure they have found the main problem. They will need to re-run the experiment even to be sure that the known problems are the main problems. And presumably they (and the neighboring experiments) can at least obtain the world’s best measurement of neutrino speed at this energy.
The oscillations-to-tau-neutrinos measurement requires timing with much less precision. It won’t be affected by the known problems.
in reply to Matt’s comment:
“Once a group has started an experimental measurement, they want to finish it. It is embarrassing enough to get it wrong; it is even worse to throw up your hands and say “we can’t do it”…”
This is a good reason why withholding or downplaying the superluminal result might have been better judgment than publicizing it.
It’s not what OPERA was designed to investigate, and it requires greater timing precision than the designed experiment required.
I think Pavel has a good point. It would be sound science to have the option of declaring that the experiment was not designed to measure neutrino speed at the accuracy required to verify this peripheral result. Now the OPERA group (and CERN complicitly) seem to be compelled by their own hasty publicity to ‘fix’ the experiment (if possible) to prove their own results wrong – results which neither they nor anyone else seemed to really believe would hold up, and which contradicted not only Einstein but also empirical evidence (1987a) and theoretical evidence (absence of Cherenkov radiation).
While it’s understandable that researchers would want to account for the 60 ns discrepancy, it’s at least conceivable that more productive science might be neglected as a result, a potential pitfall that perhaps need not have existed had greater restraint been exercised. It seems like part of proper oversight of the publicly-funded CERN, and perhaps OPERA as well, would include trying to quell this type of momentum before it became so irresistible. Fair?
Fair. If we concluded from what is happening now that the measurement simply wasn’t within their reach, no matter what they did, it would be a bad idea to spend the money to have them keep trying. But I don’t think that the situation is that OPERA is far from being able to do the job. Making this measurement with an accuracy of 10-15 nanoseconds seems possible, though tricky. And to leave this 60 nanosecond discrepancy unresolved when it can be resolved with a few weeks of neutrino data doesn’t make sense to me. You should also keep in mind that learning how to do this kind of long-distance timing measurement accurately may come in handy for future measurements of other types; that may have benefits you can’t currently foresee.
More interesting factors: moving an atomic clock — or any clock — affects the relative speed at which it ticks. So does moving it up or down in earth’s (actually the earth-moon system’s) gravitational well. Clocks with nanosecond-range accuracy and stability can show such effects. Plus, moving clocks exposes them to environmental variations. Typically, using portable cesium clocks to coordinate time only has an accuracy in about the 1 microsecond range. [All this has had to be taken into account when correlating the various observatories/standards bodies used by the BIPM and IERS to calculate world time standards (TAI and UTC).]
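The magnitudes of those relativistic effects are easy to estimate from the standard weak-field formulas (round numbers; the altitude and driving speed below are illustrative, not the actual CERN and Gran Sasso figures):

```python
# Rough sizes of relativistic effects on a clock carried between labs.
c = 3.0e8   # speed of light, m/s
g = 9.8     # gravitational acceleration, m/s^2

# Gravitational time dilation: a clock sitting 1 km higher in the Earth's
# gravitational well runs fast by a fractional rate of g*h/c^2.
h = 1000.0
grav_rate = g * h / c**2                      # ~1.1e-13
grav_ns_per_day = grav_rate * 86400 * 1e9     # ~9 ns per day per km of altitude

# Kinematic time dilation: a clock driven at ~110 km/h (30 m/s) for 10 hours
# runs slow by a fractional rate of v^2 / (2 c^2).
v = 30.0
kin_rate = v**2 / (2 * c**2)                  # ~5e-15
kin_ns_trip = kin_rate * 10 * 3600 * 1e9      # well under a nanosecond per trip
```

Both effects are comfortably within nanosecond-clock sensitivity, which is why transported-clock comparisons must correct for them.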
A typical PC’s time will drift several seconds per day, adding up to several minutes over the course of a few weeks. They are notoriously bad clocks, which is why they need to be time-synced regularly with standard internet time servers (there’s an NTP [Network Time Protocol] service built in to every modern operating system to do this). But unpredictable network latency over the internet makes it accurate only to within several tens of milliseconds at best; most Windows implementations are only accurate to within about a second of UTC.
The atomic clocks on GPS satellites are continuously monitored and validated against terrestrial time standards; individual clock discrepancies are published, as are the exact ephemerides for each satellite. Relativistic, Sagnac (rotating inertial reference frame), and ionospheric effects can be calculated, so we can now calculate time delays of signals from satellites directly overhead with extreme accuracy. (Satellites nearer the horizon can suffer from unpredictable diffraction and multi-path distortion effects as the signals travel through the atmosphere and ionosphere, a major limiting factor in GPS accuracy.) So, thanks to the U.S. Air Force, GPS has become the de facto standard for accurate time signals around the world, accurate to about 14 nsec. By having two sites simultaneously receiving time signals from the same satellites, the clocks at the two sites can be synchronized to within less than 10 nsec.
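The two-site synchronization trick (common-view time transfer) can be illustrated with a toy calculation; all the numbers below are invented for the example, and the point is that each site subtracts its own known signal travel time, so the satellite’s emission time drops out of the comparison:

```python
# Toy common-view time transfer: two sites receive the same satellite chirp.
c = 299_792_458.0                    # m/s, assumed constant (the "irony" below)

d1, d2 = 20_200_000.0, 20_350_000.0  # site-to-satellite distances (made up)
true_offset_ns = 37.0                # site 2's clock is ahead by this much

t_emit = 0.0                         # chirp emission time, satellite clock
r1 = t_emit + d1 / c                 # site 1's local receive timestamp
r2 = t_emit + d2 / c + true_offset_ns * 1e-9   # site 2's, including its offset

# Each site subtracts its own travel time; the difference of the corrected
# timestamps is the offset between the two local clocks.
measured_ns = ((r2 - d2 / c) - (r1 - d1 / c)) * 1e9
```

Note that computing d1/c and d2/c requires both the antenna positions and a constant speed of light, which is exactly the dependency discussed in the comments that follow.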
And just FYI: the IERS will add a leap second to UTC in the last minute of 30 June 2012, the first since 2009. GPS time is based on TAI and does not have leap seconds (although the GPS messages also report the number of seconds difference between GPS time and UTC).
A couple of ironies in the above:
1) In order to synchronize clocks at two sites with that precision, the positions of their receiving antennas on earth (and even relative to the geoid) must be precisely known. The only practical way to do this is to use GPS.
2) The “extreme accuracy” with which GPS signal travel time is calculated depends upon the assumption that the speed of light in vacuo is constant.
I think you have this a little backwards.
Today the location of your house can be determined using satellites that are in various locations in the sky. If you use your GPS detector tomorrow the satellites will have moved. Yet your GPS device will work tomorrow to find your house in the same place as it is today. This would not be true if anything were wrong with the assumption that the speed of light in vacuo is constant.
The fact that the GPS system works well — well enough for GPS to be in most cell phones and cars today, making it a multi-billion-dollar industry — is a powerful and successful test of the assumption that the speed of light in vacuo is constant.
Eric, Matt, Amy, and WT, thanks a lot for the explanations.
The relative locations of GPS satellites are constantly changing, and both special and general relativistic effects need to be accounted for when doing the calculations. For clock synchronization to work, the travel times of a single signal “chirp” to two different sites need to be subtracted out. Which you can’t do if the speed of propagation in vacuo is not constant. (Note that accurate position calculation requires visibility of multiple satellites, at least 4. And precise determinations use differential phase shift, which requires precise knowledge of satellite position, velocity and time. Time sync only requires the simultaneous observation of a single GPS satellite by the two sites, although accuracy and dependability can be improved using multiple satellites.) I just find it ironic that to test for superluminal neutrinos with sufficient accuracy requires the use of a tool that requires a constant speed of light for it to work.
Sorry, I obviously misunderstood both your point and your background.
But I don’t know if that really is so ironic. Could it have been any other way? Any time you would want to test if something could be superluminal, you would have to compare its speed against light’s speed, one way or another… and the assumption would enter there.
It is definitely ironic. GPS technology is calibrated taking Einstein’s theory of time dilation into account. If OPERA used GPS as a speedometer and got a result which undermined Einstein’s theory…yes that would be ironic. As for comparing the neutrinos’ speed with that of light, that’s just a matter of looking up the value of c in an official book of standard values I suppose.
– “Did we take the speed of light to 11 places or 12?”
– “Actually, no places after decimal point. Berlusconi insisted we use Galileo’s result or nothing.”
– “Yeah, them’s politics.”
– “But remember the myopic Hubble Telescope? And the Mars spacecraft which crashed because someone coded in inches instead of centimeters?”
– “Wasn’t us, was it?”
I am grateful that bloggers and commentators were not continuously informed as I learned these lessons the hard way!
Lesson 1: Calibration is not enough. The calibration needs to be verified with an independent, different type of measurement. This magnitude of error could have been easily detected with a primary frequency standard. Check out leapsecond.com for interesting and amusing application examples of this type of standard.
Lesson 2: Beware of complex, unverified schemes to use something convenient (a GPS clock) in an inconvenient way (far from the antenna.) Comparing the experimental graphs to the accuracy available from a primary frequency standard, I wonder about the whole synchronization approach.
Lesson 3: Measure digital signals with an analog measurement, especially if they are clocks. It is too easy to plug in a digital signal and assume that if it seems to work at all, it is pristine. A quick measurement of the analog signal reveals the garbage when you have this type of problem. If you have to move the connector after the analog measurement, there is still potential for error. Use an optical splitter or switch to allow the measurement without moving the connector.
Lesson 4: Single-mode fiber and FC connectors are much less rugged and reliable than electronic connections. Plan for them to fail a lot. Keep the bend radius large on the fiber, and tape it down. Thoroughly clean the FC connector after every use. Cut the connectors off even slightly damaged cables so that they don’t get re-used. (I started doing that late one night. After that I got home on time much more often!)
Thanks for your useful and insightful comment. If you wouldn’t mind reposting it at today’s article
it will be noticed by more readers. [I can probably do this myself but (a) am not sure how, and (b) wouldn’t want to do it without your permission anyway.]