Did you know that another name for Minneapolis, Minnesota is “Snowmass”? Just ask a large number of my colleagues, who are in the midst of a once-every-few-years exercise aimed at figuring out what should be the direction of the U.S. particle physics program. I quote:
- The American Physical Society’s Division of Particles and Fields is pursuing a long-term planning exercise for the high-energy physics community. Its goal is to develop the community’s long-term physics aspirations. Its narrative will communicate the opportunities for discovery in high-energy physics to the broader scientific community and to the government.
They are doing so in perhaps the worst of times, when political attacks on science are growing, government cuts to science research are severe, budgets to fund the research programs of particle physicists like me have been chopped by jaw-dropping amounts (think 25% or worse, from last year’s budget to this year’s — you can thank the sequester)… and all this at a moment when the data from the Large Hadron Collider and other experiments are not yet able to point us in an obvious direction for our future research program. Intelligent particle physicists disagree on what to do next, there’s no easy way to come to consensus, and in any case Congress is likely to ignore anything we suggest. But at least I hear Minneapolis is lovely in July and August! This is the first Snowmass workshop that I have missed in a very long time, which is especially embarrassing since my Ph.D. thesis advisor is one of the conveners. What can I say? I wish my colleagues well…!
Meanwhile, I’d like to comment briefly on a few particle physics stories that you’ve perhaps seen in the press over recent days. I’ll cover one of them today — a measurement of a rare process which has now been officially “discovered”, though evidence for it was quite strong already last fall — and address a couple of others later in the week. After that I’ll tell you about a couple of other stories that haven’t made the popular press…
First, an update to an ongoing measurement has allowed the LHCb and CMS experiments at the Large Hadron Collider [LHC] to claim an official discovery, where previously they had only strong evidence. In question is the long-sought decay of Bs mesons (hadrons containing a bottom quark and strange anti-quark, or vice versa) to a muon and an anti-muon. [I briefly described this process when an unconfirmed claim of evidence was made some time ago.] By combining their results and using the full 2011-2012 LHC data, they have officially “discovered” it … i.e., together they have been able to exclude the absence of this decay, at the officially sanctioned level of five standard deviations.
According to the measurement, about 3 per billion (1,000,000,000) Bs mesons decay in this fashion. But note the measurement is still not very precise, as you can see in the figure below — so let’s not jump to conclusions. That “3” could still be 2 or 4 per billion, or perhaps even 1 or 5 per billion, without surprising anyone. Now, calculations using the Standard Model (the set of equations used to predict the behavior of all known particles and non-gravitational forces) predict that this number would be about 3.5 per billion, give or take. So there is agreement — rough agreement — between data and the Standard Model’s prediction.
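For the numerically inclined, here is a minimal sketch of that “rough agreement” in Python. The central values follow the text above; the uncertainties are illustrative round numbers of mine, not the experiments’ official figures.

```python
# Back-of-the-envelope comparison of measurement and prediction.
# The uncertainties below are illustrative guesses, not official values.
from math import sqrt

measured, sigma_meas = 3.0e-9, 1.0e-9    # ~3 per billion, ~30% precision
predicted, sigma_pred = 3.5e-9, 0.3e-9   # Standard Model, "give or take"

pull = (measured - predicted) / sqrt(sigma_meas**2 + sigma_pred**2)
print(f"pull = {pull:+.2f} standard deviations")  # ~ -0.5: no real tension
```

A pull well below one standard deviation is the quantitative face of “rough agreement”; only once the measurement’s uncertainty shrinks substantially could a genuine discrepancy show itself.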
In the press, however, you’ll see statements that this new measurement has huge implications for supersymmetry (a popular but certainly not unique speculative idea for what might resolve the hierarchy puzzle of the Standard Model), and for other speculative ideas too. [From the BBC, you’d often get the impression that particle physics is “A Battle Between The Standard Model and Supersymmetry”, as though supersymmetry is the only other game in town and no one has ever had any other interesting speculative ideas about nature. This is, of course, a tad overstated.]
Well, what lies behind these press articles is apparently a powerful hype machine operating within LHCb. It is true that this measurement is very important, and has been since it was first studied seriously at the LHC, as presented in late 2011 and early 2012. But since that time, each step forward has been relatively small; this is no surprise, since to get a major advance you generally need about 10 times as much data as you had before, and that won’t happen until 2015 or later. Yet somehow, despite small-to-medium steps forward in November 2012 and now in July 2013, the hype machine has managed to convince the BBC and other press organizations that there have been multiple major scientific advances, when actually there’s been only one. You have to admire them (and to wonder why the BBC isn’t catching on).
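Why “10 times as much data”? Statistical uncertainties generally shrink like the square root of the data-set size, so a quick sketch (ignoring systematic uncertainties, which need not shrink this way) shows how slowly precision improves:

```python
# Statistical precision scales like 1/sqrt(N): a rough sketch that
# ignores systematic uncertainties, which need not shrink with data.
from math import sqrt

current_precision = 0.30            # ~30% today (illustrative)
for factor in (2, 4, 10, 100):      # hypothetical data-set multipliers
    improved = current_precision / sqrt(factor)
    print(f"{factor:>4}x data -> ~{improved:.0%} precision")
```

Even 10 times the data only takes a ~30% measurement to roughly 9%; getting to the ~3% level needs on the order of 100 times as much.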
Let me quote some of the purple prose from the LHCb press release: “The result is a stunning success for the Standard Model of particle physics and yet another blow for those hoping for signs of new physics from CERN’s Large Hadron Collider (LHC).” WOW! Whoosh!!
Hee-hee… gosh, if every measurement with a precision of about 30% that agreed with a theoretical prediction had been hailed as a triumph for the theory, there would be a lot of embarrassed scientists around… a lot of predictions have failed only after much better precision was available. Let’s see the precision reach 3%; then we can talk about “stunning”. And a “blow” to supersymmetry and to other speculative ideas beyond the Standard Model? Didn’t we already go through this exact same story last November? While this measurement rules out a large number of variants of supersymmetry, there are many other variants which it doesn’t yet touch. And most other speculative ideas in the scientific literature survive this measurement with an even larger fraction of their variants intact.
[Many of the speculative ideas that have been discussed by particle physicists predict multiple Higgs particles. Typically the variants ruled out by this measurement are those in which at least one type of Higgs particle interacts much more strongly with bottom quarks than does the single Higgs particle present in the Standard Model.]
In short — the recent news is a step forward, but not at a level that would justify the hype. There isn’t that much of a change from November, and in any case, you typically can’t rule out any one speculative idea with a single measurement. Constraints on speculative ideas come from combining many different measurements being made at the LHC, and elsewhere. So — let us express our congratulations to LHCb and CMS for their joint discovery, and also, in a way, to LHCb’s press office for its … skills … regarding journalists and other non-experts.
[By the way, the BBC article contains an error. “Scientists have confirmed one of the rarest phenomena of decay in particle physics, found about three times in every billion collisions at the LHCb.” No, it’s not 3 times in every billion proton-proton collisions at LHCb; it’s 3 out of every billion Bs mesons, which are certainly not produced in every proton-proton collision.]
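To see how different those two statements are, one can multiply it out; the Bs production rate per proton-proton collision below is a hypothetical placeholder of mine, just to illustrate the distinction, not a measured number.

```python
# The BBC's slip, in numbers.  branching_fraction is the measured result;
# bs_per_collision is a HYPOTHETICAL placeholder, not a real figure.
branching_fraction = 3e-9     # mu+ mu- decays per Bs meson produced
bs_per_collision = 1e-2       # hypothetical Bs mesons per pp collision

rate_per_collision = branching_fraction * bs_per_collision
print(f"~{rate_per_collision:.0e} such decays per pp collision")
# Far rarer per collision than "3 in a billion": the 3e-9 is per Bs meson.
```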
50 Responses
I am not sure it makes sense to say a physical theory is more “likely to be false than true”. For a start it seems to assume that reality can be given an exact mathematical description – which is by no means self-evident. But in any case these sorts of considerations belong to philosophy rather than physics. However, I think it does make sense to say that one physical theory is more likely to survive than another, so would you agree that of all existing alternatives (or extensions) of the Standard Model, String Theory is, at present, still the best bet? Do you think there is a need for a “unified theory”? Also, if you do not mind answering a somewhat personal question, do you still intend to work on topics in string theory, and why?
I should perhaps add that I am a mathematician and I have recently become interested in string theory hoping to understand why it gives insight into mirror symmetry. As a mathematician I must say I find it hard to imagine that a “false” physical theory could provide such correct mathematical intuition. I don’t think there has ever been any such case in the past.
You are right to criticize my glib language here; I wasn’t being careful. I do mean mainly that history is never kind to theoretical ideas; when there are competing ideas, it usually turns out that all of them are inconsistent with nature, or one of them (at best) is consistent. Sometimes it turns out more than one is consistent, in a surprising combination. But it is unwise to get into the habit of thinking that because theorists have thought of a good idea, it is likely to apply to nature.
String theory is still the best bet among theories that humans have invented **so far**. I’m not as certain as some of my colleagues that we’ve thought of all the possibilities. And string theory has so many manifestations that I’m not sure that the one that’s relevant in our universe has been imagined yet. But who cares what I think? Nature is nature… so don’t pay any special attention to me.
I have no qualms about working on string theory when it answers questions of interest to me. I view it as a tool. I think, as a mathematician, you should view it the same way.
There are many examples of “false” physical theories that provide enormous physical and mathematical insight. The most famous examples are the maximally-supersymmetric and near-maximally-supersymmetric quantum field theories (so-called N=4 and N=2 supersymmetry). These theories cannot have the chiral symmetries of the quarks and leptons in nature, so we know they are “wrong” (in the sense of not being able to describe nature). Another example is the theory that contains only gluons and no quarks (often called “Yang-Mills theory”) and its N=1 supersymmetric version; we know these theories aren’t “true”. And yet, we have gained enormous insights into all sorts of physical processes and calculational techniques using these “false” theories. The same is true for their contributions to math: at least one Fields medal and many other awards have been won, and more will be won soon, using the N=4 and N=2 theories. So I think you are suffering from belief in the same myth that many others suffer from: it is simply not the case, in history or in the present, that the most elegant and symmetric theories are the ones that we find in nature.
Late to the party, so don’t mind me especially.
But I happen to have a hard time understanding how a tool that can predict black hole entropy correctly (AFAIK), and that early on provided an apparently useful “flux tube” model of QCD situations before it could be derived from QCD (AFAIK), can be an entirely mathematical theory.
What I don’t get is how it can yield useful predictions based solely on mathematics. Too many options, too few constraints, as you imply.
The closest I can think of is how sometimes one can derive a completely mathematical measure of “energy” in some differential equation models. Maybe I should revisit those cases and see how physics seemingly appears out of mathematics. (Maybe it is the few constraints that provide the necessary observational context after all?)
1) You should remember that the black hole entropy that was calculated in string theory had also been calculated in mathematical black holes. No one has actually **measured** black hole entropy. And no one knows that the black holes in our universe get their entropy the way they do in string theory. All we learn is that string theory has quantum mechanically consistent black holes; we do not learn our universe has *those* black holes and not others. String theory is a very useful tool for understanding black holes, but that doesn’t mean that black holes of our universe work precisely the way string theory says.
2) The flux tube “model” is precisely what I’m talking about. It is a “model” for flux tubes in QCD — a tool for clarifying difficult questions in QCD. It is imperfect and gives imperfect answers; it gives qualitatively sensible answers, but not quantitatively correct ones. In short: string theory can be used to study flux tubes that are similar to those in QCD; but the QCD flux tubes that we actually find in nature are somewhat different.
Being a tool is a good thing. I think string theory is extremely important as a tool. I don’t think it should be sold so often as a “theory of everything” [i.e., a theory precisely describing all the known and unknown particles and forces] because we don’t have any way of checking whether it is the right theory or not.
This should be a reply to the last comment above directed at me …
I very much liked the previous article explaining how string theory was useful for doing important calculations in the context of the Blackhat program (not so much the corresponding comment discussion …), and I would naturally look forward to a new article explaining how such ideas don’t have to be right (in the sense of “true in nature”) to be profoundly important for understanding different things.
But then, from reading the recent comments here concerning these issues, the conclusion seems to be that most of the current BSM physics concepts are most probably wrong anyway, and that string theory (together with the supersymmetry it needs) is at most a cool mathematical trick (similar to a transformation to a more convenient coordinate system) to simplify calculations occurring in the context of various problems, such that only people who really work on such topics should be interested in these things. But others had better forget about them and should not be (too) interested in such things … ?
For example, I quite liked the corresponding articles explaining BSM physics ideas here on this site, but now I wonder what the purpose was of explaining such things to a non-expert audience …
Three main purposes…
1) These particular speculative ideas get a lot of press, so it is helpful to my readers for me to give an explanation of what all the buzz is about.
2) One of these ideas (or something like them) might have shown up, and may still show up, in the LHC data. Those who say that the LHC will see only the Standard Model are as unreasonably pessimistic as those who were confident supersymmetry would appear were unreasonably optimistic.
3) The ideas that I covered thoroughly have staying power; supersymmetry and extra dimensions will remain important in particle physics, as will string theory, into the next few decades at least, because of their power as tools for understanding things that really are part of nature.
Note also that it was guaranteed that most “BSM” (Beyond-the-Standard-Model) ideas would be wrong. There are hundreds of ideas and only one nature. Indeed it is quite likely that all of those ideas are wrong. Humans are not very smart, you know… it can easily take us a couple of decades, even collectively, to recognize something which in retrospect was obvious.
More generally, thinking outside the box of the universe we know is a way of both guessing what secrets nature might be hiding and appreciating nature as it is. So BSM thinking is an important part of understanding the SM. Of course it is also important for solving the unsolved problems of the SM and of gravity. And it is in particular important for suggesting experiments to do, ones that might reveal the presence of a speculative idea that someone has thought of, or reveal something no one ever thought of.
When the underlying event you are estimating has a frequency of 3 parts per billion, coming within 30% of the predicted value of something so rare that you might not even have bothered to look for it if you hadn’t had a theory to point you in the right direction means a lot more than it would otherwise. Simply getting the order of magnitude of such a rare and obscure process is impressive. Also, the discovery is notable for being a confirmation of a prediction made decades earlier and not modified in the meantime, rather than a post-diction, or a prediction made just a year or two before the result was in, after early data pointed you in the right direction about how to tweak your theory so that it predicted the right thing.
I’d rate the find on a par with the prediction of the existence of Pluto from data on the movement of the solar system’s other bodies. It may not have been a terribly precise prediction, but it was a purely blind one of an extremely slight effect, made solely using a theoretical tool devised long before this application was suggested, without any prediction-specific parameter setting.
I’m not saying there’s nothing impressive about it. But what LHCb’s press office said implied it is stunning *confirmation* of the Standard Model. That’s not the case; it is stunning confirmation of quantum field theory and of the general outlines of the Standard Model. But it certainly doesn’t imply the Standard Model is anywhere near the complete story at the LHC. To do that would require accuracy and precision at least 5 times better.
“I’d rate the find on a par with the prediction of the existence of Pluto from data on the movement of the solar system’s other bodies. It may not have been a terribly precise prediction, but it was a purely blind one of an extremely slight effect, made solely using a theoretical tool devised long before this application was suggested, without any prediction-specific parameter setting.”
Don’t you mean the prediction of Neptune? The prediction of Pluto is an urban myth, since there are no longer any orbital discrepancies to make such a prediction from:
“In 1978, the discovery of Pluto’s moon Charon allowed the measurement of Pluto’s mass for the first time. Its mass, roughly 0.2% that of the Earth, was far too small to account for the discrepancies in the orbit of Uranus. Subsequent searches for an alternative Planet X, notably by Robert Sutton Harrington,[54] failed. In 1992, Myles Standish used data from Voyager 2’s 1989 flyby of Neptune, which had revised the planet’s total mass downward by 0.5%, to recalculate its gravitational effect on Uranus. With the new figures added in, the discrepancies, and with them the need for a Planet X, vanished.[55] Today, the majority of scientists agree that Planet X, as Lowell defined it, does not exist.” [ http://en.wikipedia.org/wiki/Pluto ]
Thanks for this again very nice, interesting, article 🙂
but there is again one issue which bugs me, which I have already noted in another generally very nice article about the physics and production mechanisms of gamma ray bursts. Maybe it is just because I am not a native English speaker, but to me it seems adjectives or nouns like “speculative” or “speculation” have a very strong negative connotation. From my natural, maybe wrong or too naive speech comprehension, saying that something is “speculative”, a “speculation”, etc. is, in its most negative interpretation, equivalent to saying that this something is bullshit, rubbish, nonsense, crap, a crazy idea without any reasonably justified motivation backing it up, etc., and in the most good-willed interpretation it has the meaning of an idle philosophical idea or metaphysics, which is not legitimate physics either. Calling something speculative or a speculation gives the impression that the something is completely worthless.
So my question here is: is it really necessary that theoretical concepts such as supersymmetry, or not-yet-settled explanatory ideas of how high-energy gamma ray bursts are produced, for example (neither of which is just worthless unmotivated nonsense!), are consistently paired with the adjective “speculative” or with the noun “speculation” repeatedly each time they are mentioned, sometimes even more than once in the same sentence?
Why can theoretical ideas and concepts which are not (yet?) directly experimentally confirmed and/or settled not, alternatively, just be called “theoretical ideas”, and that’s it? I think many regular readers of this site have a good enough idea about which theoretical ideas and concepts are settled and which issues are still somewhat open. And even for people who do not (yet) have a large physics knowledge of their own, such that they cannot judge this issue completely on their own, it would be enough to call a theoretical concept or idea “speculative” or a “speculation” at most once in the same article, for example when it is first mentioned. Surely none of the readers here are so forgetful that they have to be reminded with such a high repetition frequency of the fact that something is not yet settled …
Maybe it is just me, but reading texts with such a high repetition frequency of “speculative” adjectives and nouns, used to characterize things that many physicists consider not completely nonsensical, strikes me as quite odd …
….
Using the “speculative” terminology in this excessive and highly repetitive way for any theoretical ideas and concepts as they appear in different subfields of physics conveys to the public and/or lay audience the wrong and somewhat misleading impression that physicists who are interested in and working on the more theoretical aspects of these physics topics (including those who try to suggest phenomenological predictions) are people who do nothing but pull wild ideas out of their sleeves or out of thin air, without any scientifically reasonable motivation or justification.
People working on and interested in the more theoretical parts of physics are not lazy good-for-nothings who peacefully lie in the shade of a tree the whole day and idly make up crazy ideas … 😉
This was a decision made on my part after considerable reflection.
I’d rather have my readers understand that string theory, supersymmetry, and extra dimensions are more likely to be false than true. And that’s what the word “speculative” is meant to imply.
This is to counter the impression given by many proponents of these ideas that they are more likely to be true than false… or even that they’re obviously true and it’s just a matter of little experimental details.
“String theory tells us the world has ten or eleven dimensions…”
It’s poisonous quotations like that which need an antidote. String theory remains a fascinating, brilliant, elegant, exciting ***speculation***. What makes it better than *idle* speculation? First, it has a complete, well-defined set of mathematical equations that make it meaningful. That’s what makes it a scientific theory. And those mathematical equations address key questions of how quantum mechanics and gravity, as well as other particles and forces of nature, could coexist in one framework. That’s what makes it an interesting scientific theory.
It’s my job to convey the distinction between idle speculation and a real scientific speculation. I’d use the word “theory” but that word has so many meanings, and is so toxic and confused in modern parlance, that it is useless.
Thanks for these explanations,
now I think I understand your intention…
However, my challenge to the “speculative terminology” applied to theoretical ideas, explanations, and concepts was meant in a much broader context than just string theory (I did not even mention it in my comment 😀 …); rather, I was surprised that you apply it very broadly and generally to the theoretical part of all physics topics, even in the context of more “down to earth” fields such as astrophysics and the gamma ray bursts, for example.
Your mission of making it clear to people that current BSM fundamental physics theoretical ideas are most probably wrong rather than true seems to have been successful for quite some time now. Indeed it was people like Peter Woit who started this, using even more to-the-point language and arguments …
The success of the mission that Peter Woit, Alexander Unzicker, and other people who think in the same way pursue can be seen, for example, in the increasing number of physics students at different levels (even grad students) we have at Physics SE http://physics.stackexchange.com/ (a physics question-and-answer site initially targeted at academics, researchers, and students of physics and astronomy) who think that thinking about, or even pursuing, theoretical fundamental physics such as that targeted by the FFP, for example, is no longer a good or worthwhile idea.
Not sure if this is what you want to achieve, but if so it obviously works, with or without the speculative terminology on this site … 😉
Oh, I fully agree Peter Woit has done an incredible amount of damage to theoretical physics. That’s because all he’s done is tear things down, throwing out the baby with the bathwater, and what he offers in its place is, in my view, terribly naive. That’s natural: he, unlike me, is a polemicist. But to compare what he has done to what I am doing (let me remind you of this article http://profmattstrassler.com/2012/08/15/from-string-theory-to-the-large-hadron-collider/) is not appropriate. I can’t help it if people don’t make the distinction between me and Woit, but you’re the first person ever to suggest that he and I bear serious resemblance.
Meanwhile, if there is *SOME* backlash against what some of the string theorists have done, encouraging all those young people to strive to be the *mythical* Einstein (who spent all his time thinking deep theoretical thoughts, of course), and the backlash drives young people to focus more on proposing and explaining experiments (like the *real* Einstein), that would be a good thing for science. I make no apologies if I accomplish this.
But I will add: every theorist should learn supersymmetry, extra dimensions and string theory. These ideas don’t have to be right (in the sense of “true in nature”) to be profoundly important to our understanding of nature, in particular, of quantum field theory and of gravity. In fact I was writing an article about that – it will take some time to get the language clear. This is where the outsider polemicists, like Woit, get it all wrong.
Matt: “ … encouraging all those young people to strive to be the *mythical* Einstein (who spent all his time thinking deep theoretical thoughts, of course), and the backlash drives young people to focus more on proposing and explaining experiments (like the *real* Einstein), that would be a good thing for science.”
I kind of understand your saying. Let me paraphrase it. If I got it wrong, please correct me.
This has very much to do with linguistics; too many terms are overused and become toxic. For example, what is “theoretical physics”?
a. A theoretical framework which is “based” on “quantum principle and relativities” is not consistent as the two in the base are not compatible. That is, there is no true theoretical system based on these two pillars of physics.
b. The Standard Model has no “theoretical base” but is a 100% hodgepodge of the test data. The equations of the model are the “best fit” mathematical formulas for the data, and many parameters in the equations are “put in” by hand, not from any theoretical base. That is, the Standard Model is just “bookkeeping”, not a “theory” per se.
c. Being not a true theory, all “predictions” in the Standard Model have no theoretical base but come from the observation that “there is a ‘piece’ missing” or that “it must have this mass in order to balance the book”. Thus, the term “prediction” is now no longer connected to any theoretical work. Furthermore, this kind of “bookkeeping” can be tweaked, and there comes the “post-diction” which further poisons the term “prediction”.
d. A true theoretical framework in physics should be an axiom-system, with a base (definitions, axioms and procedures) and a set of consequences (sentences and theorems). As the term “prediction” is now badly contaminated, any theoretical framework will no longer produce predictions but have “consequences” which are delinked from the “pre- or post-” of any data point. When a “consequence” is verified by a data set 100 years old, it is still a “pre-”diction for that data set.
Both M-theory and SUSY are true physics theories (theoretical frameworks). But, they are at two different “levels”. If we use the Standard Model as the “reference” point, there are two “types” of theoretical framework (no longer using “theory”, as it is badly poisoned).
i. Type I: an “above”-SM framework; that is, its base contains the Standard Model. Of course, its consequences should not reproduce the SM. That is, no existing “data” (not knowledge) can judge its validity. New data is needed for this Type I framework.
ii. Type II: a “below” (beneath)-SM framework; that is, its base does not contain the Standard Model. Then, its “major” mission is to “reproduce” the SM. If it succeeds, it is a valid framework. Otherwise, it is failed trash. Thus, the “pre-”diction (consequence) of this Type II can be verified by all kinds of “old” data sets.
By definition, the base of any axiomatic framework can be chosen arbitrarily; that is, it is not subject to any testing. Its validity hinges on two points.
1. As a mathematical construct — it must be mathematically consistent.
2. As a theoretical “physics” framework — its “consequences (not the base)” must make “contacts” with the “known” physics.
If your *mythical* is about a construct which does not make any contact to the reality while *real* denotes a connection, then I now understand your saying.
“These ideas don’t have to be right (in the sense of “true in nature”) to be profoundly important to our understanding of nature, in particular, of quantum field theory and of gravity. In fact I was writing an article about that – it will take some time to get the language clear.”
This sounds interesting; looking forward to it!
Matt: “I’d rather have my readers understand that string theory, supersymmetry, and extra dimensions are more likely to be false than true. And that’s what the word “speculative” is meant to imply.
This is to counter the impression given by many proponents of these ideas that they are more likely to be true than false… or even that they’re obviously true and it’s just a matter of little experimental details.”
I am definitely following your leadership on this but will go one step further, formally saying that both M-theory and SUSY (with s-particles) are wrong. Of course, I must not say this with tongue in cheek but must use some concrete “material” evidence. First, let the Standard Model be the reference point.
a. For a theoretical framework (such as M-theory) which has a base beneath the SM, its key mission must be the reproduction of the SM. By failing this mission, it is a failed contraption. Of course, if no one can succeed, then it can still beg for a last chance. But, if there is a success, no more excuse should be given.
Do we have one success around? We do not need the LHC or a new machine to get the answer. A Google search can give us an answer in a fraction of a second. G-strings use a different “language” (different from the Standard Model language) to “spell out” all 48 SM particles. That is, the answer was not put in its base. And its “spelling” can be proofread-checked by a 5th grader who knows no physics. This proofread-check is concrete material evidence.
b. For a theoretical framework (such as SUSY (with s-particles)) which has a base above the SM, its key mission is going beyond the SM, and no data of the SM is able to check its validity. But, if all the “beyond-missions” are easily accomplished by another pathway, then SUSY should be cut out by Occam’s razor. There are at least three concrete pieces of material evidence for cutting out SUSY.
1. There is another pathway to accomplish those missions, thus cutting SUSY out by Occam’s razor. One example is the “dark matter” which could be wholly accounted for internally by the SM-sphere (soccer-ball model) without needing an additional SUSY-sphere. Some theoretical arguments on this can also be found with a Google search.
2. If SUSY is totally disjoint from this universe, it is totally irrelevant to this universe. If it is linked to this universe (even very weakly), it still must make contact with the SM-sphere. A solid theoretical argument can be presented as concrete material evidence that that “contact-point” must be at the “weak scale”. But, not only is there no “contact” at the weak scale, the LHC data shows no sign of any linkage to the SM-sphere below 1 TeV, which is much higher than the weak scale already.
3. The recent LHC B-meson decay data also rules out any SUSY-“hidden”-linkage to this SM-sphere.
With these three concrete pieces of material evidence, I am very comfortable saying that SUSY (with s-particles) is wrong.
It’s welcome to see a blog where everybody’s opinions can be heard without people calling each other wrong-headed idiots or some far more interesting names. Just thought I’d say that after previewing some of the other online physics discussions, which will remain nameless and me-less. Professionals in the physics community really have to work together for better relations with government as well as the public.
I suppose some people enjoy insulting other people, and have blogs to give themselves a nice public opportunity to do so.
It’s very sad about the funding cuts. And what’s even sadder is that unless something changes, it’s only going to get worse – in part because the public and politicians are growing increasingly disillusioned with HEP and its relevance to the modern world. Sean Carroll blogged on issues in cosmology under the heading “Talking back to your elders”. IMHO these issues apply to HEP, but in spades. And the tragedy is that standard-model particle physicists will not accept input from other models/fields that will fix the problem. It’s like watching a long slow starvation in front of a banquet.
“the tragedy is that standard-model particle physicists will not accept input from other models/fields that will fix the problem”
I don’t think you know what you’re talking about.
Oh, you know — all those standard-model particle physicists going around insisting neutrinos are massless and gravity doesn’t exist.
Exactly! Not to mention the ones who insist there can’t be any dark matter or dark energy…
With respect Matt, I think I do. Sadly I also think that the tone of your response vindicates my point.
There are reports going around of “new physics” predictions. Are these misreportings of the Bs decay rate story, or something different? E.g.
http://phys.org/news/2013-07-experimental-physics-standard.html
I am glad you raised that one. I would like Matt’s thoughts about that too. If you look through the paper on arXiv (1307.5683v2), however, it does pull back from 4.5 sigma to just 3.7 sigma (if I understand it correctly). I suspect Matt will say it’s just a bump, nothing to get excited about, and it will disappear with more data. 🙂
The paper you quote is a paper by THEORISTS; it is not itself experimental data, but rather an interpretation of experimental data, so I am sure their *claim* will not go away. But clearly these people are trying to make themselves famous by getting news reports about their paper. Notice their paper has not been double-checked or peer-reviewed, so we don’t even know it is correct. I would be patient.
That’s separate from the question of whether the data that they are trying to explain through their paper is actually correct. That data is not about the process B_s –> mu+ mu- (which is the one I wrote about today) but about the very different process B –> Kaon* mu+ mu- [where a Kaon* is a meson that contains a strange quark and a down anti-quark, or vice versa, and which has spin one; it decays further to a kaon of spin zero and a pion.]
Matt, thank you – Even to my relative layman’s eye, it seemed terribly confused what they were trying to ride off/announce, but they seem to be getting almost as much coverage as the main LHC “in line with SM” announcements, so their strategy looks to have been successful…
Now, about the neutrino oscillation confirmations: why aren’t they getting more coverage? That is another serious piece filled in!
This is something different. I don’t yet know what this is about, but the articles are written in a disturbingly unscientific fashion. This is a grab for attention by the theorists involved, clearly.
Tommaso Dorigo has a post on this
http://www.science20.com/quantum_diaries_survivor/foursigma_evidence_new_physics_rare_b_decays_found_lhcb_and_its_interpretation-117058
It should be pointed out that LHCb has reported a deviation, so it’s not just these two theorists making grand claims.
“LHCb in a talk given at EPS by Nicola Serra, quantified the discrepancy as a 3.7 standard deviations effect, and – taking into account the fact that many other bins and other variables had been studied, quantified the effect as a p=0.005 one – a “2.8 sigma effect”.”
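In case it helps other readers: the step from “p=0.005” to “2.8 sigma” is just the standard two-sided Gaussian conversion (a sketch of the convention, not of LHCb’s full procedure):

```python
# Convert a p-value to a Gaussian "sigma" significance (two-sided
# convention); a sketch of the convention, not LHCb's full procedure.
from scipy.stats import norm

p = 0.005
z = norm.isf(p / 2)   # split p between the two tails of the Gaussian
print(f"p = {p} corresponds to {z:.1f} sigma")   # ~2.8
```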
No – you’re missing the point. The same theorists did the one and only calculation which disagrees with LHCb. So there’s only one group claiming a disagreement between data and theory — LHCb presents their data, and these three theorists’ calculation disagrees with that data. And that’s exactly what Serra said; read the talk.
In other words:
LHCb says: we disagree with these theorists, and if they’re right, that means we disagree with the Standard Model
and the theorists say: we calculated what’s in the Standard Model, and our calculation disagrees with the data from LHCb
So there is no question that LHCb disagrees with these theorists. But is the calculation by these theorists correct? Are the uncertainties correctly estimated? As far as I know, there has been no independent check of their work. These are not easy calculations, and estimating uncertainties is somewhere between art and science…
Ok, I didn’t realize the same theorists that calculated the expectation are the ones now talking about the deviation.
Matt: “While this measurement rules out a large number of variants of supersymmetry, there are many other variants which it doesn’t yet touch.”
You are of course exactly correct about this new result. But, SUSY (with s-particles) can be viewed from a different angle: the old grandmother’s.
Of course, the old grandmother has no ability to know whether SUSY (s-particles) is correct (adopted by nature) or not. But, she can easily see that SUSY (s-particles) is a stupid idea.
Being unable to understand the deep definition of “symmetry” in physics, she can understand it with an easier language. Let one ball (a perfect sphere) rotate about its center while a very fine laser beam hits the ball and the beam reflects onto a screen. During the rotation, if the laser dot on the screen does not move, then all those points which are swept by the beam are symmetrical somehow.
a. For a perfect ball, the laser dot on the screen will never move. She can describe this in two ways,
i. infinite degree of symmetry,
ii. zero (0) point symmetry break.
b. If that perfect ball is punched and has a needle-hole, then she can find at least one occasion on which the laser dot will move. That is, the infinite degree of symmetry is no more, and there is at least one (1) symmetry break.
c. If a few ditches are scratched on the ball’s surface (forming a soccer-ball-like pattern), she will notice that the “symmetry breaks” increase. That is, she will conclude that the more patches on the ball, the lower the degree of symmetry.
For the Standard Model, it has 48 elementary particles which can be described as a (4 x 4 x 3) cube. And, this cube can be represented as a patched ball. If these 48 particles can be represented by a code of (3 x 3 x 2) cube, then this new cube has higher degrees of symmetry. On the other hand, if we double the 48 to 96, then the degrees of symmetry will be reduced.
With this understanding, the old grandmother cannot see that the SUSY (with s-particle) can be a higher symmetry. It should be a lower symmetry.
If this SUSY (s-particle) is not placed on the original ball but is on a new ball, then the entire system becomes a dumbbell which has very low symmetry from any “stand-point”. In fact, there is no way to convince the grandmother that SUSY (with s-particle) has higher degrees of symmetry than without it.
Well, I will not try to convince the grandmother. I know that SUSY (s-particle) is wrong, as there is no room for SUSY (with s-particle) in G-strings.
Instead of making a comment here, Luboš Motl wrote a negative comment about this post at his own blog, mainly about Matt’s usage of the word “speculation”: “Matt’s adjective “speculative” makes SUSY – and all BSM physics research – sound like a speculative, philosophical, futuristic research or a research of unhinged crackpots. It’s none of these things. … Supersymmetry is primarily a symmetry whose restoration seems inevitable at the Planck scale or lower in consistent theories of quantum gravity. It is also a symmetry that produces the most natural dark-matter candidates we know in literature.”
Of course, Matt can choose to rebut that comment or not. But, I strongly agree with Matt’s usage of that word. Furthermore, I would like to refute Motl’s comment that SUSY (with s-particles) is absolutely necessary for the issues of dark matter and the hierarchy problem. This issue can be discussed clearly with a grandmother/boy dialogue.
Boy: grandmother, someone says that SUSY (with s-particles) is absolutely needed for dark-matter and the hierarchy problem. If your soccer-ball/laser-beam model cannot address these two issues, you must be wrong.
Grandmother: Exactly. The dark matter is about (5.8571428 – w) times the visible matter from this soccer-ball “calculation”.
Boy: Planck data is now public, and it shows (25.8/4.82) = 5.3526 times. Are you just making one up from this known data? By the way, I am not good at math at all. Thus, any complicated calculation will definitely go over my head. Just tell me some of the reasons behind the calculation.
Grandmother: No, not a complicated calculation, just counting fingers. But, we first must get the “language” correct; it is a linguistic issue, you know.
A = matter/anti-matter symmetry
B = baryogenesis (matter/anti-matter symmetry-breaking)
Which one (A or B) has higher “degrees” of symmetry?
Boy: Come on! Of course, A has higher degrees of symmetry.
Grandmother: Indeed, this is “the” problem. Boy, you are wrong. Please see the following points.
a. If the ball is a “perfect” sphere, the laser dot will not move forever. Thus, it has “infinite” degrees of symmetry.
b. If there is one pin-hole on the ball, the laser dot will “eventually” make a jump (symmetry break). However big this “eventually” is, it is finite. So, it has much less degrees of symmetry than the first case.
c. If there are 24 pin-holes (patches) on the ball, the laser dot will jump (symmetry break) Y-times.
d. If there are 48 pin-holes (patches) on the ball, the laser dot will jump (symmetry break) Z-times.
It is very obvious that Z is larger than Y. Thus, case c has a higher “degree” of symmetry than case d. So, B (baryogenesis) must have a higher degree of symmetry than A (matter/anti-matter symmetry).
Boy: So what! What the heck is symmetry good for anyway! Why should it become a big deal, especially for the nature laws?
Grandmother: It is all about laziness. For example, there are 81 equations in the multiplication table. But, 5 x 8 = 8 x 5; thus, we need only memorize about half (about 40) of these equations. So, symmetry is all about memory-management; the higher the symmetry, the less memory-energy is needed. The universe is very huge and complicated while Nature is very lazy. So, Nature plays this symmetry game, as it needs to hold only one leash to control all. You kids can roam free but obey the symmetry rules. So, nature’s kids went out into the world, which is demarcated with many symmetry-ditches.
Boy: Okay, okay. So, the ball is demarcated with symmetry-ditches. What does this have to do with the dark matter?
Grandmother: In order to increase the degrees of symmetry (tighten the leash), many ditches are filled while the lands are still owned by the symmetry-claimers. So, the laser dot will not move when it goes over those demarcations. That is, those lands become “dark-lands”, no laser-jumps (no manifested “particles [or ditches]”). Yet, while they turn dark (not manifested as particles), they still take up the land, still having the landmass, no difference from the visible particles in terms of the landmass. For the 48 SM particle-land-patches, 40 of them are dark-lands. Electron-neutrino is also dark. So, the total dark-lands are 41, that is, the dark/visible ratio is [(41/7 = 5.8571428) – w].
Boy: Why “-w”?
Grandmother: The playing of these visible-kids often gets out of bounds into the dark-lands, and it causes some sparks there. According to the AMS-2 data, the out-of-bounds sparks account for about (8 to 10)% of the dark-landmass. Thus, [41 x (100 – 9)% /7 = 5.33] is almost identical to the Planck data [(25.8/4.82) = 5.3526].
Boy: Wow! You are really lucky and can make up numbers.
Grandmother: Well, the Alpha equation uses the same soccer-ball calculation.
Boy: How about hierarchy problem?
Grandmother: Not today. But, there are no “fundamental” particles beyond the SM 48 particles on this soccer-ball, from here all the way to “the … energy”.
Sorry to hear about those budget cuts. Maybe new discoveries related to antimatter would change things for the better? On the other hand, budget cuts release brain power for new projects and adventures.
You can say a lot. Your work here is much more influential than that workshop, as I (at least) do not truly give a hoot about it. The chance of it getting any heavy-duty job done is not very good. It did not do much in the past and won’t do much in the future, either.
In addition to big gadgets (such as the LHC), physics can be done with paper and pencil. In Phil Gibbs’ blog (http://blog.vixra.org/2013/07/18/naturally-unnatural/#comment-33919 ), a new methodology of physics (beauty-contest) was discussed. In a sense, we just had a Sunnymass workshop there.
Great to see you are still thinking about your Particular Significance followers and hope everything is going well for you Matt. I will just repeat the comment I made to Pauline Gagnon on her latest USLHC blog entry:
I find the short green bar labelled SM on that LHCb plot very confusing; I assume it is just a legend trying to indicate that the long green bar is the Standard Model prediction, but LHCb could have at least put the legend in a box to reduce confusion. At first sight it looks as if it is some competing theory predicting a rate of around 5.4e-9.
Pauline has since kindly amended that plot on her blog and it looks a lot less confusing (at least to me!).
Matt says: “(think 25% or worse, from last year’s budget to this year’s — you can thank the sequester)…”
Question: the sequester is a 6% across-the-board cut to non-defense discretionary spending. How do you get 25% or worse out of that?
On another note, are you going to blog on the recent neutrino experiment done in Japan?
1) A simple exercise in government 101. Organization A funds org B, which funds org C. Each has fixed costs and discretionary costs; only the latter can be cut. Therefore it follows (since funding of other organizations is discretionary) that the cut to org B is larger than the cut to org A, and the cut to org C is larger than the cut to org B (see the sketch below). If you really think that an across-the-board cut of 6% means that every non-profit takes a cut of 6%, you should go back to school. Or do you really think every government employee got a pay cut of 6%, and that private health insurance premiums and legal fees were generously cut by 6% to keep government agencies happy?
2) yes.
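Here is a toy version of that cascade; every number in it is hypothetical, chosen only to show how a 6% cut at the top can become a 25% cut two layers down.

```python
# Toy cascade of a 6% across-the-board cut (ALL numbers hypothetical).
# Each layer has fixed costs it cannot cut, so the entire cut comes out
# of the discretionary money it passes to the next layer down.
a_budget, a_fixed = 100.0, 40.0      # agency A: $100M total, $40M fixed
cut = 0.06 * a_budget                # the 6% sequester cut: $6M

b_nominal = a_budget - a_fixed       # B would have received $60M
b_budget = b_nominal - cut           # ...but receives $54M
b_fixed = 36.0                       # B's own fixed costs

c_nominal = b_nominal - b_fixed      # C would have received $24M
c_budget = b_budget - b_fixed        # ...but receives $18M
print(f"cut to C: {1 - c_budget / c_nominal:.0%}")   # 25%, from 6% at the top
```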
Actually, since many government employees are being furloughed for some period of time, they are indeed taking a pay cut. Granted, that does not affect overhead costs but it does show that fixed costs are not totally fixed.
Also, to get to 25% from 6% would mean overhead rates would need to be VERY high. In your example, if A got $100 million and C got $24 million out of it for actual research, then a $6 million cut would indeed mean 25% less for research (assuming A and B did not cut any spending). Is our science funding that inefficient? If so, then it would be far better to address the $76 million than the $6 million lost to sequester.
I’m not saying you’re wrong, just that I’m skeptical. Can you say which funding organizations we are talking about (DOE, NSF, etc.) and which line items were reduced? What and how much are these fixed costs that are not producing any scientific output?
Unfortunately I am *not* yet able to determine why the cuts are so severe. Part of this may be decisions within the DOE and the NSF (which I know most about, and where I know the numbers are as bad as what I described) to pull the money from theoretical particle physics in order to preserve certain experimental facilities or projects (within and/or outside particle physics), since you can’t cut experimental facilities and projects so easily. (Why not? First, because they have larger fixed costs, e.g. contracts with industry. So cuts often end up increasing costs, not decreasing them. Second, because sometimes it is all or nothing: either you fund a $10 million project, or you don’t. You can’t cut it by 6%; 94% of an accelerator or telescope is no better than 0%.)
Concerning the high rate of overhead — this is true across government. To deal with this requires legal reform, health care reform, campaign finance reform, etc. The sequester does not deal with it at all. And that’s why the sequester is so bad: money is cut from the good things, and nothing is done about the bad things.
Yes, it’s tough! There is never enough money to cover everything. But what frustrates me are claims for more funds to replicate existing trials. We may have to await a wild card; then new areas of research will open up like magic. Such as the NASA detector planned for launch 10 years from now to determine whether the universe is a field of 1 dimension. I believe this could be the case and wrote a brief essay about this notion. E.g. when a swallow transits from S. Africa to France to lay eggs in the nest it built the previous year, I reckon that as far as it is concerned it moves through just 1 dimension: the airspace between the 2 countries. But of course it does not know of the teachings of Euclid, and that it actually moves through 3!
It’s kind of annoying how badly the BBC does at science articles, especially in particle physics. 🙁
Also I have a question: do you think we will ever reach a point where scientists will abandon supersymmetry altogether, if its variants continue to be ruled out? Or will it always have its advocates for the foreseeable future?
Supersymmetry as relevant to physics at the LHC — as relevant to the hierarchy problem — will have fewer and fewer advocates if nothing else shows up at the LHC. There’s no good reason for the number to drop to zero, since supersymmetry will still be a good idea, but the number will be much smaller. Part of the reason is that many people (like me) are not advocates for anything in particular, and the number of such people will grow.
Of course if something clearly different from supersymmetry appears at the LHC, the number of advocates will plummet. This is what happened to technicolor, which predicts no observable Higgs particle.
But the baby must not be thrown out with the bathwater. Remember that supersymmetry
a) might apply to nature at energy scales unreachable by the LHC, and
b) is a powerful mathematical, technical and conceptual tool, useful in understanding quantum fields and gravity [which are part of nature] even if it is not directly a part of nature.
For these reasons, particle physicists can and should continue to learn the basic mathematics and results of supersymmetry; I would never advise a student not to learn it, even if it never shows up in an experiment. Same with extra dimensions and string theory; these are powerful tools of great value, even if they have nothing to do with nature directly. Technicolor, too, was a good idea, and it illustrates key conceptual points about how the Standard Model behaves; every expert should know the basic idea, even though it’s not how nature works.