[QUICK UPDATE April 2: I’ve now finished an article giving more details of how OPERA, with LVD’s help, solved the mystery.]
[UPDATE March 31 2 a.m.: following study of the slides from a mini-workshop recording the results of investigations by OPERA and LVD, I now have the information to remove all the guesswork from my original post; you’ll see outdated information crossed out and newer and more precise information written in orange. I’ve also added figures from the talks.]
March 30 5:30 p.m. Two main scientists at OPERA, one leading the OPERA team as a whole and the other leading the neutrino speed measurement, resigned their leadership positions today. The suggestion from the press is that this is due to personal and scientific conflicts within the OPERA experiment, rather than due directly to the errors made in the neutrino speed experiment; but of course the way the measurement was publicized by OPERA caused serious internal conflicts at the time, and those conflicts are surely part of the issue. [Oh, and meanwhile, back over at the CERN lab, some good news: collisions at the Large Hadron Collider with 8 TeV of energy per collision were achieved this afternoon.]
The mystery surrounding OPERA, the Gran Sasso experiment which (apparently through a technical problem) measured that neutrinos sent from the CERN lab to the Gran Sasso lab in Italy arrived earlier than expected by 60 nanoseconds, ~~seems to be on the verge of being~~ is resolved. Statements made by an OPERA scientist in the Italian-language press, pointed out to me by commenters (Titus and A.K.), seem to imply that OPERA has more or less confirmed that the problematic fiber-optic cable (along with the clock problem, to a lesser extent) was responsible for a 60 nanosecond (billionth-of-a-second) shift in the timing, creating the false result. ~~We do not yet have official information from OPERA about this, but~~ Talks given at a mini-workshop a couple of days ago make clear that this is the case.
The way this was done ~~if I/we understand the Italian correctly is something like~~ is the following ~~with all details still very uncertain~~. Inside the Gran Sasso Laboratory, which is deep underground, the LVD experiment and OPERA experiment are not too far apart, ~~though I haven’t been able to determine the distance; probably a few hundred meters~~ just 160 meters (a meter is about three feet). If a muon (which, depending on how energetic it is, can travel tens or hundreds of meters through solid rock) passes through OPERA, there is some probability that it will later pass through LVD as well, a half a millionth of a second or so later. If the same particle passes through both detectors, it can be used to check whether the clocks at the two experiments are synchronized.
Where might such a muon come from? ~~Well, this is one point of uncertainty.~~ Cosmic rays (high energy particles from space colliding with an atom in the upper atmosphere) create showers of pions, and from there a shower of muons and neutrinos (and their anti-particles). Either (1) a muon (or anti-muon) from the shower can penetrate an exceptionally long distance through the rock and into the Gran Sasso Lab, or (2) a neutrino (or anti-neutrino) from the shower can pass effortlessly through the rock until it hits an electron or an atomic nucleus and creates a muon (or anti-muon) which then makes it to LVD and OPERA. ~~I am still unsure whether (1) or (2) is more common; whichever is more common is what they will use.~~ They use process (1), which is common because there is one direction from which there are an exceptionally large number of muons making it through to the underground lab, due to an unusual thinning in the rock to one side of the lab — and by chance, this direction is one which makes it possible for such muons to sometimes pass first through OPERA and then through LVD. Over the past five years they have collected about 300 such muons.
These muons will be traveling at (just a tiny bit below) the speed of light, and will traverse the distance between the two detectors in a time that can be calculated (since the distance between the two detectors can be precisely measured inside the lab). Now, if you measure the muon’s arrival time in OPERA relative to the clock that OPERA uses, and you measure the same muon arriving in LVD relative to the clock that LVD uses, this gives you the relative timing between the two detectors. At least that’s my assumption about what they must be doing; I might not be quite right about this. Confirmed!!
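As a rough illustration of the method (my own sketch, not code from either experiment; the 160-meter separation is the figure quoted above):

```python
# Sketch of the shared-muon clock comparison (illustrative, not OPERA/LVD code).

C = 299_792_458.0        # speed of light, m/s
SEPARATION_M = 160.0     # approximate OPERA-to-LVD distance quoted above

def expected_transit_ns(distance_m: float = SEPARATION_M) -> float:
    """Crossing time for a muon moving essentially at light speed."""
    return distance_m / C * 1e9

def clock_offset_ns(t_opera_ns: float, t_lvd_ns: float) -> float:
    """Offset between the two experiments' clocks, inferred from one muon.

    Each arrival time is measured against that experiment's own clock; if
    the clocks agreed, the measured gap would equal the light-travel time,
    so any excess is a relative clock offset.
    """
    return (t_lvd_ns - t_opera_ns) - expected_transit_ns()

print(round(expected_transit_ns(), 1))  # 533.7 ns, i.e. about half a microsecond
```

Averaging this offset over the roughly 300 shared muons then beats down the event-by-event timing jitter.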
Apparently, the time difference between when muons arrive at LVD and then arrive at OPERA, or vice versa, as measured by their respective clocks, shifted by about 73 ± 9 nanoseconds ~~some time in~~ around May to August 2008, relative to what it was before ~~2008~~ that time. And then it shifted back, at around the same time that the suspect optical fiber was screwed back in the way it should have been. This strongly suggests that the badly adjusted optical fiber was responsible for ~~the whole~~ the majority of the 60 nanosecond shift, and that this shift was stable over ~~all or almost all of~~ the entire period of both versions of the experiment, which I called OPERA-1 and OPERA-2. The question of stability of the fiber’s orientation over that whole period was one of my main worries about whether the real problem at OPERA had yet been found; the new information from the LVD/OPERA timing comparison would suggest strongly that this worry is unfounded.
A big question that the OPERA leadership that resigned today has to answer: why didn’t they do this cross-check before they made their result public? Did no one think of it til recently? And if not, why not? Was it harder than it sounds? Or did they just miss an obvious opportunity?
Anyway, it would appear that the mystery is now solved: that the fiber was the main problem, and that the second problem they identified (with their main clock) was ~~irrelevant~~ less important but not irrelevant; the clock’s frequency is slightly off, so if a neutrino arrives early in a grouping of neutrino bunches its time will be accurately recorded, but if it arrives later in a grouping of neutrino bunches, its time will be recorded a bit later than it should be. This effect counteracts and reduces the 73 nanosecond shift from the fiber down to about the observed 60 nanoseconds. We’d like to hear more details about the technique used in comparing LVD and OPERA.
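A toy model of how the two effects combine (the per-cycle drift value below is invented purely so that the arithmetic reproduces the 73 → 60 nanosecond reduction; the real clock behavior is surely messier):

```python
# Two counteracting timing errors (toy model; DRIFT_PER_CYCLE_NS is my guess,
# chosen only to reproduce the reported 73 -> 60 ns reduction).

FIBER_SHIFT_NS = 73.0        # fiber problem: makes every neutrino look early
CYCLE_S = 0.6                # the clock is resynchronized every 0.6 seconds
DRIFT_PER_CYCLE_NS = 26.0    # assumed lateness accumulated over a full cycle

def apparent_early_arrival_ns(t_in_cycle_s: float) -> float:
    """Net apparent earliness for an event at time t_in_cycle_s after a resync.

    The drifting clock stamps later events in the cycle progressively late,
    which cancels part of the fiber's constant 73 ns shift.
    """
    clock_lateness_ns = DRIFT_PER_CYCLE_NS * (t_in_cycle_s / CYCLE_S)
    return FIBER_SHIFT_NS - clock_lateness_ns

# Averaged over events spread uniformly through the cycle, the clock cancels
# half its full-cycle drift, pulling 73 ns down to the observed ~60 ns:
print(FIBER_SHIFT_NS - DRIFT_PER_CYCLE_NS / 2)  # 60.0
```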
Also, OPERA reports multiple cross-checks were made internally within OPERA, and all of them agree with the conclusions obtained via the method I’ve described. (These are quite interesting and I’ll try to cover them at a later time.)
Puzzle apparently solved, through excellent detective work by OPERA, both in collaboration with LVD and on its own.
…and we’ll see in a few months, after LVD, OPERA, ICARUS and BOREXINO all measure the neutrino timing independently, whether they all get the same answer. But meanwhile, as often happens during the final aria of a tragic OPERA, a few heads are rolling.
70 Responses
I’m having a little issue. I can’t get my reader to pick up your feed; I’m using Bing reader, by the way.
Thanks for letting me know; have you tried another method?
@ nic stefan
The speed of light being variable (see my post nr. 12) does not solve the issue. Read my post nr. 17.
Thank you for your reply. But I still have a point. You are resting your whole epistemological case on the obvious fact of the imperfections of measurements. You give the impression that all the other facts about neutrinos are known and so the matter is settled. But this was the whole idea of all these experiments – to find out more about the fairly strange neutrinos! It seems to me a lot of things about them remain unknown. What if the neutrino events which appear faster than light are exactly that?

We already have MINOS; OPERA, it’s true, represents a big question mark, but even ICARUS found faster-than-light neutrinos. You are attributing everything to imperfection of measurements, but let me give one example. You buy a ball machine, of the kind you use on tennis courts for training. If you keep it indoors, and measure the speed of the balls over a period of time, you will get a set of slightly different measurements, due to variations in temperature, humidity, the wearing off of the mechanisms, etc. (of course the measuring instruments will have some imperfections attached, too). Try to move it into the open air, and you will get the same variations, but at one point some gusts of wind may slow or accelerate the tennis balls. That’s exactly the point with the neutrinos. How can you be so sure that those faster or slower than the speed of light neutrinos are not intrinsic properties of their somewhat strange behaviour? Their speed/parameters could be affected by yet unknown conditions.

It seems to me that the overwhelming majority of physicists are clinging to their own phlogistic theories. By refusing to accept even as a possibility faster than light particles you cannot advance science. There are so many unsolved problems in physics that nobody can be so arrogant as to dismiss such a possibility. You have eminent physicists who supported inflation, only to turn out later as its fiercest critics. What about the variable speed of light theory, which would solve an awful lot of stuff? Just before the turn of the 20th century, many papers, very well supported mathematically, were presented at the Academy of Sciences in Paris arguing against the possibility of heavier-than-air flight! And the same happened with the speed of sound, rockets, etc. How can one be so sure we are not on the verge of a totally new chapter in physics? Regards…
“By refusing to accept even as a possibility faster than light particles you cannot advance science.”
You insult me, and most of my colleagues, through this statement. What evidence do you have that I, or most of my colleagues, refused to accept this as a possibility? Find on this website, if you can, any statement in which I refused to accept this as a possibility.
It is clear to me that you are the one with the agenda.
I said only this: A detection of an effect that is the same size as the imperfections of the measurement in which it appears is no evidence of anything at all. To accept your claim that we should accept such things as evidence would violate basic scientific technique that I teach to every freshman physics student. And if we accepted your approach, not only would science never have made any progress, there would be many more medicines on the market killing people than there already are.
I still do not understand the question of average measurement. Let’s suppose you have exactly seven people over 2.20 metres in height, in a town of 200,000 people. If you take the average measurement of men’s height in this town, you will probably get about 1.80 metres for an adult male. According to statisticians’ logic, you cannot have persons of 2.20 metres in height, but still they are there, for everybody to see. Neither are the instruments imperfect, nor do the conditions differ from one measurement to another. Now imagine these seven persons are as many neutrino events, out of which three or four are faster than the speed of light. And then what?
Yes, you are very confused, because you are still assuming all measurements are perfect, and focusing on the wrong issue altogether.
This is not an issue of averages. It’s not a statistical issue. It is an issue of imperfections of measurements, and what knowledge you can draw from an imperfect measurement — an epistemological issue.
In order to know what you know, you must know how precisely and accurately you know it. The example you give is totally off the mark; it has nothing to do with this issue.
Let’s suppose everyone in your town is blind. Nobody knows anybody’s height. Nobody has any way of knowing. But one day someone decides to take a ruler and measure. So they go out and do a measurement of everyone’s height using a ruler. 7 of the people are measured to have a height of 2.2 meters. Is it true that there are 7 such people, or not?
That depends how accurately and reliably the people using the ruler made their measurements. If they only made a mistake once every million people, then you can be sure, there are at least a few people with a height of 2.2 meters.
But if every now and then they make a mistake of 10 centimeters, and very rarely they make a mistake of 20 centimeters, and very very rarely they make a mistake of 30 centimeters, and very very very rarely they make a mistake of 40 centimeters, then out of 200,000 people they may occasionally measure someone to have a height of 2.2 meters — even if every single person in the town has a height that is exactly 1.8 meters.
So to tell the difference between the two possibilities:
a) there really are 7 people in the town who have a height of 2.2 meters, and
b) everyone in the town has a height of exactly 1.8 meters
requires a level of accuracy and precision that the measurement I’ve just described cannot attain.
Therefore it is not possible to conclude, from the measurement, whether (a) or (b) is correct.
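The blind-town argument is easy to check with a quick simulation (a sketch of mine; the error rates below are invented for illustration):

```python
# Simulate the blind town: every resident is exactly 1.80 m tall, but the
# ruler errs by 10/20/30/40 cm with decreasing (invented) frequencies.
import random

random.seed(0)
TRUE_HEIGHT_M = 1.80
ERROR_TABLE = [(0.10, 1e-2), (0.20, 1e-3), (0.30, 1e-4), (0.40, 1e-5)]  # (size, rate)

def measure_once() -> float:
    """One ruler reading: the true height plus whatever mistakes occur."""
    h = TRUE_HEIGHT_M
    for size_m, rate in ERROR_TABLE:
        if random.random() < rate:
            h += random.choice([-size_m, size_m])
    return h

measured = [measure_once() for _ in range(200_000)]
overestimates = sum(1 for h in measured if h >= 1.89)
print(overestimates)  # a sizable tail of readings at 1.9 m and above,
                      # even though nobody is actually taller than 1.80 m
```

With these rates, on the order of a thousand of the 200,000 readings come out at 1.9 m or more, and occasionally a reading near 2.2 m appears, even though everyone is exactly 1.80 m tall.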
The statement about the neutrinos is that the fact that a few of the neutrinos appear to arrive early is not conclusive of anything. The measurement is not precise enough to distinguish the two possibilities:
a) there really are a few neutrinos that travel 1 part in 1,000,000 faster than light
b) all neutrinos travel at exactly the same speed (1 part in 1,000,000,000,000 below the speed of light).
Thus the conclusion of this experiment is that there is no evidence for any faster-than-light neutrinos, because the experiment is consistent with there being no such neutrinos. It is true that there is also no evidence against neutrinos that travel faster than light at 1 part in 1,000,000. But we don’t believe things based on the fact that we don’t have evidence against them. I have no evidence against there being dragons living inside the moon, but that doesn’t mean their presence is plausible.
This is, again, not a statistical point. It is a point about knowledge. If you don’t have clear evidence, then you can’t claim knowledge. And you shouldn’t go around believing things just because they haven’t been shown to be false. You should believe things for which there is also some amount of evidence that they might be true. We have no such evidence in the case of faster-than-light neutrinos; the imperfections of the measurements are too large for you to draw conclusions based on those few neutrinos that seem to be going too fast.
A clarifying example: imagine you are given a speeding ticket in a 100 kilometer per hour zone. The police officer says you were driving 100.000001 kilometers per hour and deserve the ticket. Your lawyer will obviously point out that there’s no way the officer could possibly have made such a precise measurement of your speed, and for all the officer knows your speed might have been 99.999999999 kilometers per hour, below the speed limit. And I am sure you will not dispute your lawyer’s argument in this case.
A war with the OPERA team?
Making the rise time smaller (the curve steeper) would have just made the anomaly a smaller value (say 10 ns, instead of 70 ns). I think their timing chain is derived from an earlier experiment called ANTARES, and some of this might be legacy hardware.
I think the issue is that even with the unscrewed connector the light intensity is attenuated only by roughly 35% or so. The time to get to 5 V should be inversely proportional to the light intensity (assumed constant over the whole period). The voltage seemingly is proportional to the log of the total light intensity over time (my R-C circuit memory is rusty), and if one assumes the intensity is a constant-depending-on-connection multiplied by time, we get the linear relationship. The time to hit 5 V seems to be more than 200 ns or so for the “right” case, and the additional 70 ns is just 35% of that.

I think the problem is that such an attenuation of just 35% could equally well be seen if the fiber lengths differ, or if the splitter (which is passive, with no amplification) is more loaded. Seems to me designing for just one specific light intensity will cause the circuit to fail if the fiber is changed or another splitter is added.
I guess the lesson to take home would be to do periodic checks of all calibrations, especially since they don’t seem that particularly time-consuming.
AK, I do not think you are being “too grouchy about the whole thing.” I think you raise several good points. This has been 6 months of…(Matt wouldn’t post it). Anyway, we have a great story, seems to be air-tight, all the holes have been filled…and yet until these neutrino tests are rerun as planned, that is all it is – a story. I’m sure you have read of cases where a person is put in jail based on seemingly air-tight evidence only to find out years later that he is innocent based on DNA. A lot is at stake here, possibly one of the pillars of 20th century physics, and I just want to be absolutely sure. I know these amazing machines and detectors have other work to do, I know this costs time and money, but I still think it needs to be done. I know of at least one member of the ICARUS team publicly calling for an end to any further testing of neutrino velocity. All I want is a few good clean up-to-date experiments showing that a decent number of high-energy (up to 200 GeV would be nice) muon-neutrinos produced here on Earth are moving at v = c.
@ Matt Strassler, March 31, 09:55
“…because of the imperfections of measurement technique;”
Maybe the “loose cable” was just a poor excuse for in fact not understanding what is really the matter. Maybe fundamentals of relativity are at stake. The claim of an Australian team that the alpha constant might not be constant after all is an example of existing doubts with regard to laws of physics being independent of time and place.
If the ~~“loose”~~ incorrectly screwed-in cable, along with the clock drift, were not at fault, then the precise measurement of the correlation of the timing between LVD and OPERA would have given a different answer.

Yes, fundamentals of relativity are at stake, and that’s fine. The same with changes in the fundamental constants over time and space (something which I’ve studied and worked on myself). Whatever we do in high-energy physics, there’s always something big and important at stake — that’s why we do it.
But the measurement devices have to work properly, or instead of learning information about the world, we end up misled. And that’s why no one experiment or observation is ever, by itself, taken for granted.
The order seems to be (Sirri, slide 5):
December 6 to 8: Measurement of the fiber delay showed an extra 70 ns over the calibrated value from 2007.
December 13: Fiber screwed in fully and new measurement showed the fiber delay going back to 2006 value.
As to the photo from Oct 13, 2011, Autiero explicitly stated that the state of the connector during data gathering was unknown. This was only half-true; its state during OPERA-2 was fully known.
Per Cartlidge (http://www.sciencemag.org/content/335/6072/1027):
“Scientists who wish not to be identified say a few persistent OPERA researchers spotted the problem during tests the collaboration’s leaders at first opposed.”
Farther down, he says:
“A source familiar with the experiment says some researchers thought the measurement should have been rechecked before the neutrino velocity results were submitted to a journal in November, but OPERA’s scientific management resisted carrying out such a check.”
In Caren Hagner’s interview (the German report), she stated:
“We detected that if the cable is tilted just a little bit from the ideal position the little box only receives parts of the signal.”
This statement is technically true, but most people would consider it plain obfuscation. What the photo shows is not a cable “tilted just a little bit from the ideal position.” It shows a cable that was not screwed in right.
Maybe I am being a bit too grouchy about the whole thing. Oh, well.
You’re partly right about this, but also look at the timeline in Sioli’s talk, pages 2-5, which gives a more complete overview.
We need to learn more, but in the *notes* of Sirri’s powerpoint it specifically says that the shift in the timing due to the improperly plugged fiber was controversial at the time… for reasons not explained, unfortunately.
In Sioli’s talk it is clear that they didn’t start looking seriously at the timing chain until after OPERA-2 ran. Why, I don’t know, but they looked at many other things between September and November (checking their analysis, their statistical technique, their use of general and special relativity, and some trick with cosmic-ray muons that is not explained), and only got to the timing chain in December.
When they did, they very quickly found this timing problem, and, as you say, they saw that screwing the fiber back in changed the result by something just a bit larger than the famous 60 nanoseconds. They also found the problem with the clock, which could have shifted things back.
To guess why there was a two month delay at that point would be speculation. But it would appear from what Sioli writes that there was no consensus as to whether these two problems, which shift the result in opposite directions, were responsible, because (a) they didn’t understand the physical mechanism behind the two problems, and (b) they were not sure the problems had really been there during both OPERA-1 and OPERA-2.
The big question is why it took them so long after this point… both to figure things out, and why they didn’t reveal the situation to the world sooner. I’m sure there were bitter arguments internally within the collaboration.
But Sioli’s talk makes clear that from then til February they were working to understand *why* the fiber could create this delay, and what was causing the apparent clock drift. I’m speculating here, but my guess would be that there were multiple interpretations at that time… and this may be why there was no consensus as to whether to reveal what they knew.
The answers (according to Sioli, slide 4) came in February, not in December. I don’t know why; perhaps there were conflicting pieces of information initially. I suspect we’ll learn why. And once they understood that (a) the fiber could cause a stable, repeatable delay, through a mechanism explained in Sirri’s talk, and (b) the clock had a tiny drift that would accumulate to a few nanoseconds over each 0.6 second data-gathering period, they could then predict how these two effects would collectively affect their measurements.
Then they could use the LVD-OPERA comparison to see if that prediction was correct. And it was.
All of this represents excellent detective work.
But it also raises questions as to why the problem was not caught earlier, and why these tests were not done *before* telling the world about their result. Checking each link in the timing chain seems, from the outside, like something that should have been done *before* any press conferences.
And that is probably a view shared by some within OPERA — perhaps enough to generate resignations of the leadership of the experiment as a whole and of this particular measurement. You can imagine that many in OPERA feel that this leadership has unnecessarily dragged the reputation of OPERA, and by extension its members, through the mud.
The statement you quote in Science magazine doesn’t quite make sense. It may be that the leadership felt the timing chain was a less likely source of a problem and made it a lower priority compared to a few other things that were checked during the September-November period. Others within the collaboration may have felt that the leadership was too confident in the equipment.
My own experience watching many other measurements fail in a similar way is that some subtle feature of the equipment is commonly at fault, so in this sense the failure of the OPERA leadership to do a complete bevy of equipment tests before the September press conference does raise serious questions about their scientific judgment. They kept saying that they’d checked everything and couldn’t think of anything else, so that’s why they went public with their anomaly. But re-measuring every step in the timing chain would seem to be a sine qua non before saying “we checked everything”.
By the way, about that interview in German — the translation of “orientation of the fiber” is ambiguous. I think that what she said is consistent with “not screwed in properly” (but my German isn’t good enough to be sure, and the translation was somewhat different.) But she does give the impression that the effect is more delicate and subtle than it would appear to be from Sirri’s photographs, where it is clear that it really just wasn’t screwed in all the way — apparently not so subtle, and more damning.
Yes, I agree with your point that the connector issue looks so all-important only in hindsight. Back when they were running cross-checks they were probably more worried about the “new” things like the GPS circuitry and the relativistic corrections. I think another possible reason is mentioned in passing in one of Cartlidge’s many articles on OPERA FTL:
“The investigators discovered that the pulses’ transit time varied by several tens of nanoseconds depending on how tightly the coaxial fiber cable was plugged into a socket attached to a card inside the experiment’s master-clock computer. The card converts the light pulses into electronic signals. Any loose connection was supposed to stop the pulses from being registered, but instead it appears that the card allowed the delayed pulses to get through.”
So the card was seemingly designed to detect loose connections but failed to do so (incidentally, the card was developed at IPN Lyon, Autiero’s institution). But note they kept using the phrase “plugged in” (Cartlidge is based in Rome, so this is unlikely to be a translation issue).
As to your (b) “they were not sure the problems had really been there during both OPERA-1 and OPERA-2.”
Well, they had pictures showing the connector was loose before the OPERA-2 run started (the Oct 13 pic), and that the connector was still loose after the run ended (the Dec 6 pic). The OPERA-2 run was in late Oct/early Nov. These were looked at by Dec. 8, 2011 (at least I assume so, since this would have been high priority at that point). Technically, yes, the connector could have shifted position between how loose it was in Oct 13 and on Dec 6. So saying they were unaware of the situation for OPERA-2 (by February 22) is perhaps technically true, but barely so.
As to the leak driving the announcement instead of the opposite, note that the Feb 22nd announcement contained no details. They could have made an announcement with that level of specificity in January (or perhaps December) itself. It seemed coaxed out of them. The leaks were in no way “semi-official”; as the wording from Cartlidge I quoted shows they were from sources at odds with the “official” way.
I realize the pressures OPERA leaders were under, once they went public with the FTL result, and the desire now to let this die down gently and gracefully. I actually agree with that; mistakes are human and while the researchers directly responsible will probably get a private talk from the boss, there is no reason for public crucifixion or extremes such as firings. But I do think it important that the mistakes made in how information was released be acknowledged.
I’m not sure you’re reading “supposed” correctly. It may be used in the sense of “assumed” rather than in the sense of “intended”. That is, they may have assumed that since everything was working stably there was no possibility of an improper connection.
About the leak: My assumption had been that they were pressured by rumors that ICARUS had a negative result and would be publishing it soon. But of course a leak within OPERA is also possible, and perhaps, given what we now know, more likely.
About the pressures: again, the point isn’t just to let things die down gracefully, but to do so in a way that saves face. What OPERA had to do in February *was* embarrassing: they had to say “something is wrong and we’re not sure we have understood it yet.” They would have done much better if they had been able to say “we found our mistake, and here is exactly what it was and how we figured it out.” That’s a situation professionals would be much more comfortable doing.
But of course the nature of the mistake — a cable designed in the lab that was supposed to carry out the measurement — is embarrassing. Especially as it seems more and more likely that it could have been detected had a more complete program of investigation been carried out before going public.
None of the sources explain how an incorrect fiber optic connection can result in a 60 ns delay. That is orders of magnitude too large for a connection alignment issue and orders of magnitude too small for data retransmission necessitated by a poor signal. The explanation that part of the signal was removed makes no sense, and the only possible interpretations of such an issue would result in a mean zero error.
I do (mostly) know the answer — it has to do with how the box that the optical fiber works — but am too tired to go through it now; I promise a sensible explanation within a day or two. The correct answer was actually inferred by one of my commenters to an earlier post on OPERA, either http://profmattstrassler.com/2012/02/27/why-the-curtain-has-not-fallen-on-opera/ or one of the four earlier posts that preceded it in the previous days.
I can’t find the comment to which you refer. Could the delay be due to error correcting code computation when fiber optic signal checksums fail? That could be in the right neighborhood if it was implemented in hardware.
Check out Eric Schumard’s guess after http://profmattstrassler.com/2012/02/24/finally-an-opera-plot-that-makes-some-sense/ . It is an analog-electronics effect, not a digital one. More details in my article to appear tonight or tomorrow.
Let me try my hand at explaining this (since I am no physics expert, take this with a teaspoon of salt, though not a cup since it is based on Sirri, slides 6 and 7):
That fiber is not a communication fiber like MM/SM OC-3/OC-12/OC-192 etc, carrying protocol packets. It carries a single pulse every millisecond from the GPS receiver above ground. This pulse resets the master clock, which ticks with nanosecond precision and can be relied on not to drift more than 10 ns within 1ms. The common 1 ms pulse derived from GPS (common to LNGS and CERN) is used to arrest further drift via the reset.
The pulse detector on the receiving card (the PCI card on the PC), in simple words, integrates the incoming light intensity (via a photodiode charging a capacitor) and detects a pulse when the integrated value exceeds a threshold. With a loose connector, what seems to happen is that the incoming intensity is lowered, but the incoming light shines long enough to still trigger the detector. But because the intensity is less, the detector has to wait longer to hit the threshold. As a result, there is an added 70 ns delay before the GPS signal gets from the outdoor receiver to the FPGA which timestamps detected events. Note that though the GPS receiver sends a pulse per millisecond to the clock card, what finally gets to the FPGA is actually a pulse per 0.6 s (the DAQ reset signal). In effect, the clock the FPGA sees ended up delayed relative to the real GPS one by 70 ns, while the CERN side sees the right clock. The net effect is to make the neutrino journey appear shorter in time (faster) than it really is.
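If this reading of the mechanism is right, the delay follows from a simple scaling argument (the 200 ns nominal crossing time and the 26% light loss below are illustrative guesses of mine, not measured values):

```python
# Toy model of the threshold-crossing delay: the integrated signal ramps at a
# rate proportional to light intensity, so crossing time scales as 1/intensity.

def crossing_time_ns(intensity: float, nominal_ns: float = 200.0) -> float:
    """Time to reach the trigger threshold; nominal_ns is the time at full intensity."""
    return nominal_ns / intensity

def extra_delay_ns(light_loss: float, nominal_ns: float = 200.0) -> float:
    """Added delay when a fraction `light_loss` of the light is lost at the connector."""
    return crossing_time_ns(1.0 - light_loss, nominal_ns) - nominal_ns

# A ~26% light loss on a 200 ns ramp lands close to the famous 70 ns:
print(round(extra_delay_ns(0.26), 1))  # 70.3
```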
This is correct; but do you understand how the pulse-per-millisecond becomes a pulse per 0.6 seconds? I am still confused about this technical point…
One correction – the fiber does carry an 80-bit code in addition to the start pulse but that is not particularly relevant since the master-clock uses the start pulse for the 1 ms clocking, and its detection is what is getting delayed.
Ah; thank you both. As Eric Shumard points out — at http://profmattstrassler.com/2012/02/24/finally-an-opera-plot-that-makes-some-sense/#comment-6746 — dispersion in a single mode fiber is much less than a nanosecond even over 8 kilometers. But simply reducing the intensity of the fiber optic signal could delay a multiple photodiode circuit designed to “de-bounce” spurious signals caused by dispersion over longer distances, so I am completely satisfied with this explanation.
The photodiode (http://www.jdsu.com/ProductLiterature/etx100_ds_cc_ae.pdf)
produces a current that is roughly linear with the incident light power. The photodiode drives a capacitive and resistive load so that the voltage across the capacitor changes at a rate that is dependent on the amount of incoming light. When the end of the optical fiber is further from the photodiode, the amount of light that hits it is less, presumably because the light coming out of the fiber is a cone with some opening angle which means the photodiode intercepts a smaller fraction of the light. The photodiode itself is really fast with sub-ns rise times. The capacitor connected to the photodiode is apparently very large so that the rise time is stretched considerably. I can’t think of a good reason why the capacitance is so large since it doesn’t need to be. This photodiode is capable of resolving GHz signals and it is used for that in many applications which obviously would not work if a capacitor slowed things down to give a 100 ns rise time. If the capacitance were small, then the change in the rise time would be small and the effect of not screwing the fiber in all the way would be correspondingly smaller. It would also be a more robust design since capacitors change their value over time and temperature. Perhaps if this board had been designed correctly we wouldn’t be having this discussion.
The 0.6 s is the length of the DAQ cycle, which begins with the start of an accelerator cycle. The accelerator cycle is 1.2 s. There are several clocks involved in OPERA. There is a reference clock on the PCI card that the optical fiber from above ground connects to. This clock is locked to the GPS-derived signal on the fiber and is updated every 1 ms. There is also a master clock derived from a 20 MHz oscillator. This is a free-running clock and is not locked to anything. It is the clock that was discovered to deviate 124 ppb from nominal. At the start of the accelerator cycle (the start of the DAQ cycle) the time kept by the reference clock is recorded and the time kept by the 20 MHz master clock is zeroed (or some equivalent). Whenever an event happens, the time is recorded as the sum of the elapsed time kept by the master clock since the beginning of the DAQ cycle plus the reference clock time at the beginning of the DAQ cycle. The timing error from the reference clock is 73 ns, in the direction of making neutrinos appear to arrive early. For the bunched-beam data (neutrino bursts lasting a few ns) the average time of the events from the beginning of the DAQ cycle was 75 ms. This gives an average deviation of the master clock time of 124 × 10^-9 × 75 ms ≈ 9 ns, in the direction of making neutrinos appear to arrive later. These add up to making neutrinos appear to arrive about 64 ns early.
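The arithmetic in the last few sentences can be checked directly (the 73 ns, 124 ppb and 75 ms figures are the ones quoted from the talks; the code is just a sketch of the bookkeeping):

```python
# Two competing effects on the measured arrival time:
#  - the fiber delay makes the reference time 73 ns too early
#  - the free-running 20 MHz master clock runs 124 ppb slow, and the
#    bunched-beam events sit on average 75 ms into the DAQ cycle
fiber_error_ns = 73.0           # makes neutrinos appear early
clock_deviation = 124e-9        # fractional deviation, 124 ppb
mean_time_into_cycle_s = 75e-3  # 75 ms

drift_error_ns = clock_deviation * mean_time_into_cycle_s * 1e9
net_early_ns = fiber_error_ns - drift_error_ns

print(f"drift contribution: {drift_error_ns:.1f} ns")  # drift contribution: 9.3 ns
print(f"net early arrival:  {net_early_ns:.1f} ns")    # net early arrival:  63.7 ns
```

The ~64 ns net effect matches the famous ~60 ns early arrival within the quoted uncertainties.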
Don’t worry about misspelling my name. Everybody wants to add the c. Also, a long time ago I was part of the IMB proton decay experiment (SN1987 and all that).
Thanks, Eric. I have a less technical discussion appearing in the article I just finished, but I think it is consistent with your statements (though I skipped some subtleties) and I’m glad to have your more technical remarks. I’m also glad to hear your question about the large capacitance; I didn’t have a clear idea as to why there was such a long rise time and I’m interested to hear that you also are wondering. That is a design question to pose to the OPERA neutrino-speed team; apparently this device was actually designed by Autiero’s group, and one would have expected they would know about this possible failure mode, but perhaps they had much more limited experience with this type of electronics than you do.
Also glad to learn more about your background. Thank you for your help.
Could the capacitance be so large in order to ignore spurious signals from dispersion in much longer fiber lengths?
The motivation for the large capacitance in the photodiode circuit is unclear. I’m speculating that it was a mistake but that the design worked and was stable so victory was declared and everybody moved on. Dispersion in the fiber shouldn’t result in spurious signals but rather should smooth out the edges of a sharp pulse. It was correct to be skeptical when first hearing that a loose optical fiber connection would result in a delay of 10s of ns since that is not how a typical optical system works, at least not one designed for ns timing.
One more comment…I really sympathize with the members of the OPERA collaboration. This experiment is very difficult and would not be possible without their dedication, intelligence and integrity. This episode is a demonstration of science working. Sometimes the human part of the equation makes things messy but in the end it works.
They have circuitry on the master-clock card – an FPGA. That could be set up to generate a 0.6 sec signal from the 1 msec one. I think something as simple as just triggering on every 600th incoming pulse would work. The 0.6 sec signal would also then have the exact same delay of 70 nsec, though its period would indeed be 0.6 sec. They have another, finer clock, a 50 nsec one, from the master-clock FPGA to the timestamper FPGA. Kind of like the hour hand and the minute hand of a clock. If the hour hand is consistently off by one, however precise the hands, the resulting time is off by an hour. Incidentally, I think it is this 20 MHz clock that is responsible for the +/- 25 nsec jitter present in OPERA-2. Looks like ICARUS too has this, though I don’t know why.
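The divide-by-600 idea is simple enough to sketch (hypothetical code, not the actual FPGA logic): keep every 600th 1 ms pulse, and note that any fixed delay on the input pulses is inherited unchanged by the divided-down output.

```python
def divide_pulses(pulse_times_s, divisor=600):
    """Keep every `divisor`-th pulse, counting from the first."""
    return [t for i, t in enumerate(pulse_times_s) if i % divisor == 0]

delay = 70e-9  # the ~70 ns delay every input pulse carries
pulses = [i * 1e-3 + delay for i in range(1800)]  # 1 ms pulses, all delayed

ticks = divide_pulses(pulses)  # 0.6 s ticks, each carrying the same delay
print(len(ticks), f"{ticks[1] - ticks[0]:.3f}")  # 3 0.600
```

The tick spacing is exactly 0.6 s; the constant 70 ns offset never shows up in the spacing, which is why it could hide for years.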
There is a picture and a block diagram of the PCI card here:
http://www.lngs.infn.it/lngs_infn/contents/lngs_en/research/experiments_scientific_info/conferences_seminars/seminars/AUTIERO_LNGS_CNGS_Jan07.ppt
The card and the FPGA were seemingly developed in-house, and I couldn’t find any documentation other than the neutrino-velocity presentations.
The OPERA DAQ system is documented at:
http://www.ipnl.in2p3.fr/declais/OPERA-DAQ-march2002.doc
This is quite old, and probably obsolete.
I have written an article on the OPERA diagnosis and repair of its mistakes, which I will publicize tomorrow. First, thank you for your help; you (and Titus and Eric Shumard, and perhaps some others I’ve forgotten) made it far better than I could have made it alone. Second, if you are curious to look, and interested to comment, please feel free. It is at http://profmattstrassler.com/articles-and-posts/particle-physics-basics/neutrinos/neutrinos-faster-than-light/opera-what-went-wrong/
Matt, you’ve been spelling Shumard wrong.
Thanks! Fixed.
Hmmm, Sirri in slide 7 mentions the FC connector is screwed in, not just plugged in. The photo in slide 8 also seems to indicate that. That means for the connector to be loose, it should have been unscrewed. Accidental jarring of the rack etc would not be enough, since, after all, it stayed the same way over years.
Sioli, on page 5, says the “Anomalous condition” arose from an “intervention on the fiber connector.” I guess that means somebody must have unscrewed it, say to pull the PCI card, and then failed to screw it back in properly. Ironically, that connector is the one easiest to access in the tangle of cables (Sirri, slide 8). I think they should be able to check whether the PCI card was pulled out in mid 2008 (the PC would have to be rebooted, and this should show up either in its system log or in the network logs, with the link interface to the switch/router flapping). If it turns out the PC was never reset (and hence the PCI card never pulled out), one would have to wonder about what happened.
Guess this is no longer a physics discussion, but I suspect it is an issue that does affect physics.
Loose cables sink fables.
I’m disappointed that some of the leadership had to resign, because in my view they did everything correctly: 1. Looked for errors without making the result public. 2. Made the result public without saying neutrinos were travelling faster than the speed of light.
Knowledge is about double checking a result and even if a mistake is found, people gain from the experience.
I doubt very much that they resigned because the mistake was made. Mistakes happen; we’re human. The big issue is how the process was handled. Perhaps this mistake could have been found before there was any press conference?
I agree mistakes do happen. But the main issue, I think, is that Autiero/Ereditato failed to go public with the error immediately after OPERA found it. The fiber check was complete on Dec. 8, 2011 (Sirri, slide 5 on http://agenda.infn.it/materialDisplay.py?materialId=slides&confId=4896). They kept the results to themselves for two and a half months until an anonymous source (Luca Stanco?) leaked it to Science’s Edwin Cartlidge (http://news.sciencemag.org/scienceinsider/2012/02/breaking-news-error-undoes-faster.html?ref=hp). They had photos showing the connector loose by Oct. 13, 2011, clearly invalidating the OPERA-2 results (Sirri, slide 8). Even after the true (second) ICARUS refutation, Autiero insisted a rerun was still needed to check whether OPERA was wrong (http://www.nature.com/news/neutrinos-not-faster-than-light-1.10249). That was totally unwarranted; logically the ICARUS answer had to be preferred, since for ICARUS to be wrong it would need both an experimental error and one that exactly matched the neutrino superluminality. I think this holding back had real consequences; MINOS and possibly T2K probably continued with their preparations and wasted money and time on them.
But I think the reaction of the scientific community also needs critical evaluation. The C-G theory was not a refutation, and they should not have said so in their draft arXiv paper or media announcements (I notice they changed the wording by the time the paper got to PRL). The paper only severely restricted possible superluminal theories. That applied to the Cowsik et al. pion-decay point too. The first ICARUS refutation was actually a replication of a part of the OPERA experiment (no C-G decay seen), and added nothing (and that result got reported twice as brand new). Most of the initial objections to the result were plain wrong. Contaldi’s theory of GR effects, van Elburg’s theory of SR effects, and the many objections to the statistical procedure of OPERA-1 were all incorrect.
In the end, from an experimental point of view, seems like OPERA did leave us two things: a mathematical technique for evaluating a time shift (the ML statistical procedure; my old background helps me appreciate its worth), and a technological improvement in measuring timing in particle physics.
One final point, at least partly from my current background in psychology: why is it that neutrino speed/mass measurements mostly tend to make errors toward the superluminal (that includes imaginary mass)? Why aren’t these errors more symmetrically distributed around ‘c’ and mass of 0? Note that the loose connector is astonishing. It came loose at precisely the right point: after calibration of the link and after the 2007 LVD-CNGS intercalibration, but before data collection. Is there some kind of nonconscious bias toward setting up the timing/distance/mass reference so that errors leaning toward superluminality are more likely?
I think you are too quick here.
The photo taken Oct 13th was apparently taken for some other purpose, and not recognized as a source of a problem at that time.
The measurement Dec. 8th indicated some kind of problem, but not the source of the problem. The fiber was not yet known to be the cause. Any reasonable experimenter would seek to narrow down the nature of the problem before broadcasting its existence; suppose, for instance, that one of the two discrepant measurements had itself simply been in error? Or suppose the time shift discovered was something that was irrelevant in the short-bunch measurement? You can’t just toss random information around in science; you’ve got to nail it down. That said, a two-and-a-half-month delay *does* deserve an explanation. Maybe we’ll get one, maybe we won’t. In any case, it clearly took them some time to find the fiber problem and the clock problem (I base this on an interview in a German newspaper that we discussed last month) and even longer to figure out how they worked together, one giving a positive shift and one a negative shift, to create the apparent 60 nanosecond early arrival in a stable way.
Finally, the statement that Autiero wanted a re-run isn’t entirely fair either; it would appear that they only determined for sure what had happened over the last few weeks, after their announcement that they had a problem. And I don’t yet know if that announcement came because there was a leak, or the leak came because there was going to be an announcement. Are you saying you know this for sure?
My own speculation — probably wrong — is that when they discovered the problem in December they probably knew right away that they’d almost certainly found a serious mistake… but they didn’t know its details. Their only way to save face was to be able to announce to the world, with the confidence that scientists always want to have when they make scientific statements, that they’d found precisely where their mistake was and that their result was now consistent with Einstein. It took them too long, so eventually they were forced to say something about their errors a month ago, either because of internal pressure or because of a leak, and then on top of that, ICARUS won the race.
In any case, the real justification to re-run these experiments in May is likely to be not the neutrino speed measurement but something else: for instance, the development of new techniques that can be used to improve existing and future neutrino experiments. It would not surprise me if better timing may allow the development of various tricks for reducing the backgrounds from cosmic rays. In short, it is likely that there is an important silver lining.
I don’t know if your statement that neutrino speed measurements have been systematically shifted toward the faster-than-light direction is correct.
Prof. Strassler,
Is it the presence of URLs which makes some of my posts get held up for moderation? They are on topic and the URLs are relevant; is there anything you can set on your site to make the URL check more intelligent (a link to INFN should clearly pass the check)?
Can a damn Friday pass without me having to worry that all the planned tests on the velocity of high-energy muon-neutrinos will be cancelled!?
Let’s forget OPERA – and their ever more evident technical glitch (where the hell were these LVD guys with their 2008 data 6 months ago?). Let’s forget ICARUS – and their 7 neutrino events of unknown energy…
Way back in 1979 a paper (Experimental Comparison of Neutrino, Antineutrino, and Muon Velocities) in Physical Review Letters found evidence of what may have been faster-than-light high-energy (32 GeV to 195 GeV) muon-neutrinos. The data “appear to show a rise (above c) with increasing neutrino energy. A best-fit line is (v-c)/c = (0.3 + 0.003E)×10^-4”, where E is the muon-neutrino energy in GeV. The authors said in the pre-print: “In the absence of a definite reason to choose this deviation (from v = c) and in view of the spread of uncertainty…this line is not sufficiently different from a constant value to be significant.”
In 2007 MINOS found at the 99% confidence limit that muon-neutrinos with an energy spread from 3 GeV to 120 GeV had velocity between -2.4×10^-5 < (v-c)/c < +12.6×10^-5.
So it seems to me, despite OPERA and ICARUS, there is still good reason to conduct the planned tests on high-energy muon-neutrino velocity. DO NOT CALL OFF FURTHER TESTING.
The high-energy argument doesn’t cut it. OPERA’s results are now, after the corrections, fully consistent with light speed for high-energy neutrinos. And, yes, taxpayers pay for this; the MINOS replication would cost half a million dollars (http://www.nature.com/news/timing-glitches-dog-neutrino-claim-1.10123).
I don’t know whether the replication would get us additional information, such as a better estimate of the neutrino mass. If not, then one does wonder as to its utility. I agree with you that it is strange these neutrino measurements always tilt toward the superluminal side, but I suspect that has more to do with psychology than physics. Even when data analysis is blind, the experimental setup seems to be biased toward erring on the superluminal side.
Icarus is symbolic of thoughtless behavior, but what’s in a name? You can measure the speed of neutrinos a million times and you will get one million different results, if your measurements are accurate enough, of course...
I would say ICARUS is symbolic of hubris — of wanting to fly too high. Ironic that it is OPERA, not ICARUS, that seems to have suffered in this way. (Though the ICARUS people kept saying “refute” before they’d actually refuted…)
Every measurement gives a different answer because of the imperfections of measurement technique; knowing the imperfections of one’s experiment is the key to determining whether observed variation is due to these imperfections or to a previously unrecognized effect in nature.
But could neutrinos still go slightly faster underground than the speed of light ?
I recall from the ICARUS experiment, published 16.03, that one particle was measured to go 19 nanoseconds faster than lightspeed, and two or three others were also measured to go faster than light!
So again: the puzzle with superluminal neutrinos/particles underground is not 100% solved, in my opinion. You, professor Strassler, and your colleagues around the world involved in the matter should not close your minds yet, of course. Arrogance is always the road to …… But I am not saying that you are arrogant, just that you should not close your eyes yet, since neutrinos have still been measured going slightly faster than lightspeed.
Best regards
me (who finds this blog among the top 3 science blogs, where Dudus Motls blog is still nr. 1 because of his lack of inhibitions, and because his sense of humor is extreme) 🙂
“I recall from the ICARUS experiment, published 16.03, that one particle was measured to go 19 nanoseconds faster than lightspeed, and two or three others were also measured to go faster than light!”
This is the wrong conclusion.
Suppose you are exactly 6 feet tall. If I set out to measure your height with a ruler, again and again, I will occasionally find 6 feet, sometimes 5 feet 11.6 inches, and sometimes 6 feet 0.3 inches. That does not mean that your height is changing. It means that my measurement technique is imperfect.
Measurement techniques are always imperfect. One of the most important (and poorly appreciated) parts of doing an experiment is determining how imperfect your techniques are. According to ICARUS, the imperfections in their techniques assure that some neutrinos will appear to arrive 5 to 10 (and occasionally even 20) nanoseconds early or late. But much of the effect of these imperfections will be removed if you look at the average arrival time. That is why there is a distribution of neutrino arrival times, sometimes early, sometimes late, but (a) consistent with an average arrival time of zero [and therefore inconsistent with OPERA], and (b) consistent with the known imperfections in the measurement techniques used by ICARUS.
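The point about averaging is easy to illustrate with a quick simulation (all numbers here are illustrative; the ~7 ns spread is merely in the ballpark of the imperfections ICARUS quotes):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

TRUE_OFFSET_NS = 0.0  # neutrinos actually arrive exactly on time
SPREAD_NS = 7.0       # measurement imperfection (illustrative)

# Each "event" is the true arrival time plus random measurement error.
events = [random.gauss(TRUE_OFFSET_NS, SPREAD_NS) for _ in range(7)]
mean_ns = sum(events) / len(events)

# Individual events scatter several ns early or late -- a negative value
# looks "faster than light" in isolation -- but the mean sits near zero,
# and its uncertainty shrinks as more events are collected.
print(f"earliest event: {min(events):.1f} ns")
print(f"mean of sample: {mean_ns:.1f} ns")
```

Some individual events always come out "early" even though the true offset is zero; only the mean, compared against the known spread, carries the message.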
Strassler: There is no way that ICARUS can prove that it is the measurement techniques within ICARUS that are the cause behind the result of some superluminal and some subluminal neutrinos, which produce the average arrival time of zero. It might as well be that some neutrinos sometimes move slightly faster than lightspeed.
Suppose I am exactly 6 feet tall. If you set out to measure my height with a ruler, again and again, you will occasionally find 6 feet, sometimes 5 feet 11.6 inches, and sometimes 6 feet 0.3 inches. That might not mean that your measurement technique is imperfect. It might mean that my height is slightly changing due to various changes in the nature around and within me!
So again: With all respect, please do not close your mind yet!
My mind is not closed; but there is a big difference between an open mind and an empty head.
I could not agree more. Only empty heads, such as those who believe in extra dimensions because their colleagues do, answer in such ways!
“It might mean that my height is slightly changing”. But a) this is precisely why you need to understand your measurement techniques first; b) this is why you can’t say anything about these putative effects if they are swamped by measurement error, as in this case.
The expectation is that these (massive, I take it) neutrinos are actually a bit slower than light. But it can’t be tested yet. (SN neutrinos of different energies and types have been shown to be slower, though, IIRC.)
On slide 8 of the powerpoint on this site is a photo of the card with the infamous connector, with the lab itself shown in slide 13:
http://preview.tinyurl.com/84hgus3
The card with the connector was developed at IPN Lyon, Autiero’s place:
http://www.ipnl.in2p3.fr/?lang=en
The mean error due to the bad connector is 73 ns. This is from an actual measurement done all the way back on Dec. 6, 2011, though not made public until now. While the clock drift is as high as 120 ns/sec, because of the nature of the drift it doesn’t contribute that much. The net effect is to almost exactly wash out the 62 ns early arrival. The connector is obviously loose by Oct. 16, 2011 (the photo is rather clear in slide 8 of the G. Sirri presentation):
http://agenda.infn.it/materialDisplay.py?materialId=slides&confId=4896
The LVD-OPERA time delta does shift by almost 70 ns in 2008-11 – see Sioli’s presentation at the above website.
By and large, I think the May tests are meaningless.
Thanks. I have read through all the talks and I agree with your interpretation; and I have now updated the post to reflect it.
OPERA and LVD are separated, on average, by 170 m (roughly 550 ft).
That is roughly 573 ns of flight time. A measurement done in 2008 showed coincident cosmic events’ (their words) distributions differing in their mean by exactly 573 ns. The distribution for the coincident CNGS neutrinos also showed the expected median of zero and a limit within the 10.5 usec spill (the original CNGS beam).
http://preview.tinyurl.com/7jbnwlp
So, yes, inter-detector clock calibration between OPERA/LVD does seem to have been well-established in LNGS (there is a book about it from 2008). I assume the issue would have been going back and tracking all the cosmic neutrinos, since both detectors are primarily interested in CNGS neutrinos?
I presume when they checked the 2009, 2010 and 2011 data, they found the means for the cosmic events to differ by 513 ns? If that is true, stopping everything on the LNGS side, and changing the beam at CERN again, seem a waste of time?
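The arithmetic behind that 513 ns guess is just a subtraction (the 573 ns and ~60 ns numbers come from the presentations linked above; treating the fiber delay as one constant shift is my simplification):

```python
# If OPERA's timestamps were shifted ~60 ns by the loose fiber, the
# mean OPERA-LVD time difference for coincident cosmic muons should
# move by the same amount during the affected years.
FLIGHT_TIME_NS = 573   # mean difference in the good period
FIBER_SHIFT_NS = 60    # apparent early arrival caused by the fiber

print(FLIGHT_TIME_NS - FIBER_SHIFT_NS)  # 513
```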
It actually sounds like a great end to the story! The real story is one of scientific mystery and technical detective work under a new level of public pressure and external scrutiny. Perhaps including a minor cautionary tale about unringing a hype bell, but not likely any sort of scandal. It could be made into a superb movie that could expose and glorify the intricacies of modern science, and show how science sets itself apart from all other human endeavors by its inherent ability to self-correct and resolve conflict by evidence and discovery, resulting in a lasting concordance by all parties. IMO that’s the beauty of science, and this story demonstrates it well, and from within a sea of compelling intrigue and imaginative speculation, the thrilling reminder that pillars of our collective understanding can be questioned, but will only fall when met with ‘extraordinary evidence’. I feel privileged as a reader of this blog to have had such an informed and insightful view of this whole OPERA episode!
Thanks for your kind words. I agree with your point of view. This was a triumph of the scientific method. I am not sure it will be viewed that way by the public, but it should be.
Have you seen Gran Sasso’s mini-workshop on the neutrino results? http://agenda.infn.it/materialDisplay.py?materialId=slides&confId=4896 (If so, why is nobody talking about this?) The slides there seem to explain (in English) the muon analysis in some detail, so it may be a more useful source than the information you’re getting from the Italian press – though it looks like what you’ve written is pretty much the same as what’s explained on the slides. They report a preliminary timing difference of (-1.7 +- 3.7) ns once both effects are accounted for.
Thanks for this crucial link. Apparently I guessed exactly right. There really wasn’t another sensible guess, though.
Yeah, still, it’s nice to have the actual original source.
By the way, I’m a big fan of your site! It’s definitely my favorite physics blog.
Oh, definitely; I hate posting with speculations, since there’s a risk I’ll mislead people.
And thanks!
In this story no one is innocent; no one pointed out the mistake, so everyone should be fired, not just their bosses. That’s what I think.
Well, if you do science for a while, you’ll realize how hard it is. Firing everyone would serve no one; and besides, they are not “fired” from the experiment, just resigning from the leadership. They will probably remain within the experiment.
And these are excellent scientists. It would appear that where a thousand mistakes could have been made, they made just one. [I’d like to see *you* do that well with something this complex.] And that was not to cross-check the timing from the optical system to the electronic system. Humans are not always perfect; sometimes they manage it, and sometimes they get close but miss.
There will be no call from within the scientific community for them to face penalties. They were honest about what they did, explained their techniques publicly, revealed a mistake when they found it, and spent the time to confirm that the mistake they found was apparently the cause of their erroneous result. This is professionalism in action.
You fire scientists when they lie. These people did the opposite.