Of Particular Significance

BICEP2: New Evidence Of Cosmic Inflation!

POSTED BY Matt Strassler


ON 03/17/2014

[For your reference if you can’t follow this post: My History of the Universe, and a primer to help you understand what’s going on today.]

I’m still updating this post as more information comes in and as I understand more of what’s in the BICEP2 paper and data. Talking to and listening to experts, I’d describe the mood as cautiously optimistic; some people are worried about certain weird features of the data, while others seem less concerned about them… typical when a new discovery is claimed.  I’m disturbed that the media is declaring victory before the scientific community is ready to.  That didn’t happen with the Higgs discovery, where the media was, wisely, far more patient.

The Main Data

Here’s BICEP2’s data!  The black dots at the bottom of this figure show evidence of B-mode polarization both at small scales (“Multipole” >> 100, where it is due to gravitational lensing of E-mode polarization) and at large scales (“Multipole” << 100, where it is potentially due to gravitational waves from a period of cosmic inflation preceding the Hot Big Bang).  All the other dots on the figure are from other experiments, including the original BICEP, which only put upper bounds on how big the B-mode polarization could be.  So all the rest of the points are previous non-detections.

From the BICEP2 paper, showing the power in B-mode polarization as a function of scale on the sky (“Multipole”).  Small multipole is large scale (and possibly due to gravitational waves) and large multipole is small scale (and due to gravitational lensing of E-mode polarization.)   The black dots are BICEP2’s detection; all other points are non-detections by previous experiments.  (Earlier discoveries of B-mode polarization at large Multipole are, for some reason, not shown on this plot.)  The leftmost 3 or 4 points are the ones that give evidence for B-mode polarization from cosmic effects, and therefore possibly for gravitational waves at early times, and therefore, possibly, for cosmic inflation preceding the Hot Big Bang!

Note: for some reason, they do not show the detection of B-modes at small scales, due to lensing, by the South Pole Telescope (SPT) and POLARBEAR.

NOTE: DESPITE WHAT MANY IN THE MEDIA ARE SAYING, THIS IS NOT THE FIRST INDIRECT DISCOVERY OF GRAVITATIONAL WAVES (AND THEREFORE A TRIUMPH OF EINSTEIN’S THEORY OF RELATIVITY.)   (The first indirect discovery of gravitational waves was decades ago and won the 1993 Nobel Prize. [Some are arguing that this detection is more direct; ok… I agree, it is.  Not as direct as LIGO would be though.])  IT WOULD POTENTIALLY REPRESENT A TRIUMPH FOR THE THEORY OF INFLATION, WHICH USES EINSTEIN’S THEORY, BUT REALLY IS A SUCCESS FOR 1970s-80s PHYSICISTS — PEOPLE LIKE STAROBINSKY, GUTH, LINDE, STEINHARDT… NOT EINSTEIN.

The claim that BICEP2 makes is that their measurement is 5.2 standard deviations (or “sigma”s) inconsistent with zero B-mode polarization on the large scales (small Multipoles).  That’s normally enough to be considered a discovery, but there are some details that need to be understood to be sure that there are no subtleties with that number.  Note that this is not a 5.2 sigma detection of inflationary gravitational waves!  For that, they need enough data to show their observed data agrees in detail with the predictions of inflation.  The 5.2 sigmas refers to the level of the detection of B-mode polarization that is not merely due to lensing.

They can only disfavor the possibility that their measurement is caused by dust or by synchrotron radiation at the 2.3 sigma level, however.  This may be something to watch.
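(If you want to translate these “sigma” figures into probabilities, here is a minimal sketch, assuming the usual one-sided Gaussian convention; the collaboration’s actual statistical treatment is more involved, so treat these as rough guides.)

```python
# Rough translation of "sigma" significances into tail probabilities,
# using the one-sided Gaussian convention common in particle physics.
from scipy.stats import norm

for sigma in (2.3, 5.0, 5.2):
    p = norm.sf(sigma)  # survival function: P(X > sigma) for a standard normal
    print(f"{sigma} sigma  ->  p ~ {p:.1e}")

# 5.2 sigma corresponds to p ~ 1e-7 (conventionally a "discovery"),
# while 2.3 sigma is only p ~ 0.01 -- far weaker evidence.
```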

A Point of Concern

One thing you can worry about is that the points at large multipoles are systematically higher than expected from lensing.  Why is that?  Could it suggest an effect that is being neglected that could also affect small multipoles where they’re making their big claim of discovery?  The more I look at this, the more it bothers me; see the figure below.  


My concern: the three data points circled in blue are all higher than they should be, by approximately 0.01, which is the same height as the points to their left. (The two points to their right aren’t higher than they should be, but the uncertainties on those points [the vertical bands passing through them] are very large.) But the prediction of gravitational waves from inflation, circled in green, is that there should be very little contribution here — which is why these points should lie closer to the solid red “lensing” prediction. So the model of lensing for the right-hand part of the data + gravitational waves from inflation for the left-hand part of the data does not seem to be a very convincing fit.

The effects of “gravitational waves” (dashed lines) should be very small around Multipole of 200, but in fact (comparing the solid lensing prediction with the black-dot data) they seem to be as large as they are around Multipole of 80.  One might argue that this actually disfavors, at least somewhat, the interpretation in terms of gravitational waves.  However, this may be too hasty, as there may be other aspects of the data, not shown on this plot, that support the standard interpretation.  I’ll be looking into this in coming days.  [And I’ve just noticed that David Spergel is also concerned about this — he points out that this anomaly shows up as a poor fit in Figure 9 of the paper, and that there are also problems, at *low* multipoles, in Figure 7.   Definitely things to worry about here…]

[[However, this point was addressed by the BICEP2 folks in their presentation.  Their view is that (1) the high data points are not statistically significant, and (2) in new data that they haven’t yet released from their third-generation experiment, they don’t see the same effect.  So this is presumably what gives them confidence that the excess is a temporary statistical fluke that will go away when they have more data.]]

How It Compares with Planck Data

After the results of the Planck satellite, described here and here, the best estimate for the “tilt” n_s of the power spectrum (which measures how much the fluctuations from inflation fail to be a simple fractal, roughly speaking) versus the “tensor-to-scalar ratio” r (which tells you how large the gravitational waves generated during inflation were, and thus how much dark energy there was) put the most likely value for r at zero, but with 0.2 still basically allowed.  This is shown in the orange region in the figure below, also from the BICEP2 paper, which shows Planck combined with a couple of other measurements.  [But strangely, this orange region does not agree with the one shown most recently by Planck; it looks out of date!  This is because they allow for the possibility that the tilt changes over time (thanks, commenter Paddy Leahy); but Kev Abazajian, one of the experts, has complained they didn’t do it consistently.  More on this in the next-to-next figure.]  The blue region is the new situation — not BICEP2 alone, but the combination of BICEP2 with Planck and the other experiments.  BICEP2 favors a value of r between 0.1 and 0.35, with 0.2 preferred, and the combination of BICEP2 with the other experiments now prefers the range 0.13 to 0.25, with 0 highly disfavored.  That means that, as long as BICEP2 has made no errors and encountered no unknown surprises in the heavens, and as long as we interpret the data in the most conventional way, the preference in current data is now for a gravitational wave signal from inflation.
From the BICEP2 paper, showing the region of n_s and r that is preferred by the data. The orange region is the preferred region before BICEP2, and the blue region is the preferred region after BICEP2 is included in the combination of experiments. The possibility of r=0 (no gravitational waves) is now highly disfavored.

Just to clarify what the orange regions are, and emphasize a point: in the figure below is Planck’s data (nothing about BICEP there)  [thanks to Oliver DeWolfe for digging this up.]  If you compare the blue region of the figure below — Planck data interpreted in inflationary models in which n_s is a constant as the universe inflates — with BICEP2 data, which prefers r around 0.15-0.3, you would conclude that inflationary models with n_s = constant are disfavored.  But models where n_s varies a little bit, which fill the orange region, are much more consistent with BICEP2.  Conclusion: if you take Planck and BICEP2 at face value, n_s is probably not a constant — which might mean yet another discovery!

From an earlier paper arXiv:1303.5082 by the Planck collaboration, combining Planck data with a couple of other studies. The orange regions are the same ones as in the previous figure; however, the blue regions mean something else entirely, having nothing to do with BICEP2. The blue regions refer to simple inflation models where the tilt n_s is constant as inflation proceeds. The orange regions allow for the possibility that n_s slowly varies as inflation proceeds; not surprisingly, allowing for additional flexibility produces a larger region.

But it’s a little early, still, to be sure about that.  For one thing, the true value of r is likely to be lower than what BICEP2 says right now, because of a well-known statistical bias.  Discoveries tend to come in on the high side, for purely statistical reasons: an experiment that suffers a statistical fluke on the low side will not claim a discovery until later, with more data, so the earliest claimed discoveries tend to be the ones that benefited from a statistical fluke on the high side.  So the true value of r might well be 0.1 – 0.15, despite what BICEP2 says now.
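A toy Monte Carlo makes this bias concrete (illustrative numbers of my own; nothing here comes from the BICEP2 analysis): when the true signal sits just below the discovery threshold, the experiments that do cross it are disproportionately the lucky upward fluctuations.

```python
# Sketch of discovery bias: among many simulated experiments, those that
# clear a "5 sigma" threshold report values biased above the true signal.
import numpy as np

rng = np.random.default_rng(0)
true_signal = 1.0      # arbitrary units (illustrative)
noise = 0.25           # per-experiment Gaussian uncertainty (illustrative)
threshold = 5 * noise  # the discovery criterion

measurements = true_signal + noise * rng.standard_normal(1_000_000)
discoveries = measurements[measurements > threshold]

print(f"fraction claiming discovery: {discoveries.size / measurements.size:.1%}")
print(f"mean value among discoveries: {discoveries.mean():.2f} (truth: {true_signal})")
# Typical output: discoveries average ~1.38, noticeably above the true 1.0.
```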

Be More Cautious than the Media

As always, I have to caution you that although I’m fairly impressed, and reasonably optimistic about this measurement, it is a measurement by only one experiment.  Until this measurement/discovery is confirmed by another experiment, you should consider it provisional.  Although this is too large a signal to be likely to be due to a pure statistical fluke, it could still be due to a mistake or problem, or due to something other than gravitational waves from inflation.  The history of science is littered with examples; remember that the 2011 OPERA measurement, which seemed to show neutrinos moving faster than the speed of light, was also far too large to be a statistical fluke.  Fortunately there will be other experiments coming, and so we’ll have a chance for various experiments to either agree or disagree with each other in the very near future.

What It Means if it’s True

If this measurement is correct, and if indeed it reflects gravitational waves from inflation in the most conventional way, then it would tell us that inflation occurred with a dark energy per unit volume (i.e. dark energy density) comparable to the scale associated for decades with the energy and distance at which all the known non-gravitational forces would naively have about the same strength — the so-called “unification of coupling constants”, sometimes extended to “grand unification”, in which the various forces actually turn out to be manifestations of just a single force.  This would be very remarkable, though not necessarily evidence for unification; there are other ways to get the same scale, which is about 100 times lower in energy (100,000,000 times lower in energy per unit volume) than the scale of quantum gravity (the Planck scale — roughly, the energy density required to make the smallest possible black hole).


91 Responses


  5. Matt, a question on inflation. I got the idea that our universe may look flat (more or less) and homogeneous because inflation has blown up its size by an incredible factor, “flattening” out the inhomogeneities.
    Does this idea necessarily imply that the visible universe that we can see is only a very tiny fraction of the entire universe?
    I mean, if the original universe in its tiny size had considerable inhomogeneities, and then it is blown up, the larger universe will still show the same inhomogeneities as before, only larger by the same scale factor as the whole thing has grown. So it seems to me the argument works only if it says that we can see only a very small part of the entire universe, and if one assumes that the small parts (or at least the one we live in) are flatter and more homogeneous than the whole thing.

    My second question, somehow related, is whether the new measurements (or older ones) give some hint of what percentage of the entire universe our visible universe is? For example, can we see one percent of the whole thing, or only a very tiny tiny fraction, or is it impossible to have any idea of this ratio?

    1. Markus,

      IANACOPP but yes you are on the right track. Not sure if it is possible to figure out a percentage as we have no idea what happened in other parts of the universe outside the visible part. One idea says that inflation could continue for ever in some parts.

    1. Or if possible could posts have the first paragraph only displayed and a MORE button so you could read the full novel if you were motivated.

      Many comments here seem to be “write only” to satisfy someone’s ego rather than cooperate with this site.

  6. I seem to remember you “speculating/questioning” that perhaps QFT breaks down above 100 TeV (in relation to the hierarchy problem). So I’m just wondering if these results, if confirmed, might eventually provide evidence that QFT is good all the way to the GUT scale?

    1. Yes, I think so. Which makes it all the harder to explain why the Higgs mass is so low.
      Also, it would provide evidence that the height of the potential in a QFT has the gravitational effect that we expected. Which makes it all the harder to explain why the energy density of dark energy is so low. Or to put it another way – why is inflation so slow now when it was much faster then?

  7. My uninformed layman take on the discussed things in this thread, after revising the new universe I am now looking at:

    – The observations are likely real.

    The attending scientists were impressed, and the data checks out on _3_ instruments! BICEP1, BICEP2 and “Keck” (bad name, since there is also a Keck optical telescope). The signal is not a loose cable.

    It’s not dust or local interference either: the spectrum behaves as expected for the CMB and not like those foregrounds. Points of contention: few “l modes” used, and the lensing spectrum as discussed here.

    Of course we need confirmation, the analysis could still be wrong. Planck will soon release their polarization data I think. (Why else would BICEP2 do their release now? =D)

    – The observations are consistent with Planck.

    True, the simplest inflation models may go. Too high energy, field strength and need for “spectral running” and “tensor” gravity modes.

    But the tilt, the “spectral running”, is smack on what Planck (and I think WMAP) predicted at 1-2 sigma resolution unless I’m mistaken, ~ – 0.015. (I had to check for myself, so…)

  8. Reblogged this on In the Dark and commented:
    Following on from yesterday’s news, here’s a more detailed analysis of the implications of the BICEP2 result. I certainly agree with the statement highlighted in red:
    Until this measurement/discovery is confirmed by another experiment, you should consider it provisional. Although this is too large a signal to be likely to be due to a pure statistical fluke, it could still be due to a mistake or problem, or due to something other than gravitational waves from inflation.

  9. c jenk: “Thus the universe has always been expanding through various transitions, and there is no ultimate beginning to explain. ”
    All you have done is substitute infinite regression in place of an explanation. And I’m sure you have criticised Christians for precisely the same thing: “If God created the universe, then what created God?”
    Science is the quest for explanations. You – and eternal inflation – suggest giving up that quest.

  10. I analyze the statistics of log-scale data in electronics experiments, for example measurements in dB. I have to convert to linear, do the stats, and convert back. If I skip this step, there are many obvious problems. As a result, I consider this step to be mandatory in order to achieve accurate results. Is this method also required in physics? If so, was the claimed statistical result computed on a linear scale? I hope this is an independent question from whether or not it is acceptable to add up the logged curves to improve the fit of the result.

    Thanks for the article. It was weird to hear about it first on the radio.

    1. In the BICEP2 paper (arXiv:1403.3985), their Figure 2 (the B-mode power spectrum is at lower left), and most of the other figures, are all plotted on linear scale. It is definitely required that statistical errors be computed on a linear scale (unless you propagate all the derivatives through ln(), which is a wonderfully nasty homework problem for the first years :-).

      It’s only their Figure 14, the one Matt reproduced, which has their data plotted on a log scale. It looks to me that they did so simply to allow them to also show all of the prior upper-limit results, which are one to four orders of magnitude higher.
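      A toy illustration of the distinction (my own numbers, nothing from the paper): the mean of values expressed in dB corresponds to the geometric mean of the powers, not the arithmetic mean, so the two procedures genuinely disagree.

```python
# Why statistics on logged data can mislead: the mean of values in dB
# is not the dB of the mean power.
import numpy as np

powers_mw = np.array([1.0, 10.0, 100.0])      # linear-scale powers, in mW
powers_db = 10 * np.log10(powers_mw)          # the same data in dBm: 0, 10, 20

mean_of_db = powers_db.mean()                 # naive average in log space
db_of_mean = 10 * np.log10(powers_mw.mean())  # correct: average, then log

print(f"mean of dB values: {mean_of_db:.1f} dBm")  # 10.0 dBm (i.e., 10 mW)
print(f"dB of mean power:  {db_of_mean:.1f} dBm")  # 15.7 dBm (i.e., 37 mW)
```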

    2. Of course it doesn’t matter; as you remark, it’s a question of convenience (easier to obtain accurate results), not of principle. It can’t be: changing variables can’t have anything to do with physics.

  11. Until the 1960s, many people did not take cosmology seriously. While the musings of cosmologists were fascinating, they remained just that for lack of definitive data. The situation began to change with the discovery of the CBR in 1964. In the wake of that discovery, additional relevant data began to be found, and many new theories were developed. One of the more peculiar ideas was the realization that particle physics had much to tell us about cosmology. The peculiarity comes from considering the intimate relationship between the study of the smallest things (particle physics) and the largest thing (cosmology). Since the 1970s many exciting things have been happening in cosmology.

    What follows in this chapter is a discussion of various cosmological ideas, in which it may often appear as if the author agrees with these ideas or with the big-bang theory. We should emphasize that this is only for the sake of discussion. In a later chapter we will see how the big-bang cosmology and related ideas discussed here are in conflict with the creation account in the book of Genesis. To discuss these concepts for now it is easiest to treat them as if they are acceptable, setting aside for a time the question of whether they are consistent with a biblical world view. In other words, we ask that you put on a “big-bang hat” to engage in this discussion. Please do not take from the discussion in this chapter that the author supports the big-bang model or that he has any enthusiasm for it.

    The Rate of Expansion and the Flatness Problem

    As the universe expands, the rate of expansion is slowed by the gravity of matter in the universe. An analogy can be made to an object that is projected upward from the surface of the earth. The speed of the object will slow due to the earth’s gravity. For small speeds the object will quickly reverse direction and fall back to the earth. As the initial speed is increased, the object will move to higher altitudes before falling back to earth. There is a minimum speed, called the escape velocity, for which the object will not return to the earth’s surface. At the earth’s surface the escape velocity is about 25,000 mph. Theoretically, an object moving at escape velocity will eventually arrive at an infinite distance from the earth with no remaining speed. Objects moving faster than the escape velocity will never return, but they will never come to rest. Space probes to the moon or other planets must be accelerated above the escape velocity. The more that their speeds exceed the escape velocity, the shorter time their trips will take.
    [Illustration of a spaceship taking off from a planet, with text indicating the mass and radius of the planet, the ship’s escape velocity, and the formula for determining that velocity. Image courtesy of Bryan Miller. Caption: Escape velocity of a spaceship.]
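    To make the figures quoted above concrete, here is a minimal check using standard constants and the formula v_esc = sqrt(2GM/R) (the numerical values are standard ones, not taken from this chapter):

```python
# Check of the earth's escape velocity, v_esc = sqrt(2 G M / R).
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the earth, kg
R_earth = 6.371e6    # radius of the earth, m

v_esc = math.sqrt(2 * G * M_earth / R_earth)   # m/s
print(f"{v_esc/1000:.1f} km/s = {v_esc*2.23694:,.0f} mph")
# ~11.2 km/s, i.e. roughly 25,000 mph, as the text states.
```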

    The universe should behave in a similar way. If the expansion is too slow, gravity will eventually reverse the direction so that the universe will contract once again. This presumably would lead to a sort of reverse of the big bang that is usually called the “big crunch.” This would also result in a finite lifetime for the universe. If the expansion exceeds some value akin to the escape velocity, the expansion will be slowed, but not enough to reverse the expansion. In this scenario the universe will expand forever, and as it does its density will continually decrease.

    The escape velocity of the earth depends upon its mass and size. In a similar fashion, the question of whether our universe will expand forever or contract back upon itself depends upon the size and mass of the universe. An easier way to express this is in terms of one variable (rather than two) such as the density, which depends upon both mass and size. There exists a critical density below which the universe will expand forever and above which it will halt expansion and collapse upon itself. If the universe possesses exactly the critical density, its expansion will asymptotically approach zero and it will never collapse.
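    The critical density itself follows from the standard formula ρ_c = 3H0²/(8πG); a short sketch (using the H0 = 72 km/sec/Mpc value quoted later in this chapter):

```python
# Critical density of the universe: rho_c = 3 H0^2 / (8 pi G).
import math

G = 6.674e-11              # m^3 kg^-1 s^-2
H0 = 72 * 1000 / 3.086e22  # 72 km/sec/Mpc converted to 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)  # kg/m^3
print(f"critical density ~ {rho_c:.1e} kg/m^3")
# ~1e-26 kg/m^3, equivalent to a few hydrogen atoms per cubic meter.
```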

    One of the parameters used to describe the universe is Ω (the Greek letter omega), defined to be the ratio of the total gravitational potential energy to the total kinetic energy. Gravitational potential energy is energy that an object possesses because of its mass and any gravity present. On the earth, some object with elevation has gravitational potential energy. Examples would include a car parked on a hill or water behind a dam. The higher the hill or the higher the dam, the more energy there is. The more powerful hydroelectric dams are those that are higher and have larger amounts of water behind them. As the water is allowed to fall from its original height and pass through a turbine, the gravitational potential energy is converted to electrical energy. Kinetic energy is energy of motion. A speeding bullet contains far more energy than a slowly moving bullet.

    Since the universe has mass and hence gravity, it must have gravitational potential energy as well. The expansion of the universe represents motion, so the universe must have kinetic energy as well. As the universe expands, the gravitational potential energy will change. At the same time, gravity will slow the rate of expansion so that the amount of kinetic energy will change as well. Generally the two energies will not change in the same sense or by the same amount, so that Ω will change with time. A value of Ω > 1 means that the gravitational potential energy exceeds the kinetic energy. If a big-bang universe began with Ω > 1 at the beginning of the universe, then Ω should have increased in value. Therefore, over billions of years the value of Ω should have dramatically changed from its initial value. For several decades all data have suggested that while Ω is indeed less than 1, it is not much less than 1. The sum of all visible matter in the universe produces an Ω equal to about 0.1. The prospect of dark matter pushes the value of Ω closer to 1.

    The fact that Ω is very close to 1 today suggests that the universe began with Ω almost, if not exactly, equal to 1. If Ω were only a few percent less than 1 initially, then the evolution of the universe since the big bang should have produced an Ω dramatically less (many orders of magnitude) than 1 today. How close to 1 did the value of Ω have to be at the beginning of the universe to produce the universe that we see today? The value depends upon certain assumptions and the version of the big bang that one uses, but most estimates place the initial value of Ω equal to 1 to within 15 significant figures. That is, the original value of Ω could not have deviated from 1 any more than the 15th place to the right of the decimal point. Why should the universe have Ω so close to 1? This problem is called the flatness problem. The name comes from the geometry of a universe where Ω is exactly equal to 1. In such a universe space would have no curvature and hence would be flat. There are several possible solutions to the flatness problem.
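    As a crude illustration of how severe this fine-tuning is (a standard textbook scaling argument, not a calculation from this chapter; the precise figure depends on assumptions, as the text notes):

```python
# Toy version of the flatness problem: during radiation domination,
# the deviation |Omega - 1| grows roughly in proportion to time t.
t_early = 1.0        # seconds -- an illustrative early moment
t_now = 4.3e17       # seconds -- roughly 13.7 billion years

omega_dev_now = 0.1  # suppose |Omega - 1| were ~0.1 today
omega_dev_early = omega_dev_now * (t_early / t_now)

print(f"required |Omega - 1| at t = 1 s: ~{omega_dev_early:.0e}")
# ~2e-19: even one second after the big bang, the deviation from
# flatness had to be fantastically small under this crude scaling.
```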
    [Illustration of a top-like shape and a funnel-like shape, with text: “If the value of Ω were too large the universe would have ceased expanding long ago and collapsed in on itself. If Ω were too small, then the universe would have rapidly expanded to the point that the density would have been too low for stars and galaxies to form.” Image courtesy of Bryan Miller.]

    One possible answer to the flatness problem is that this is just how the world happens to be. While this is not a physical impossibility, it does raise some troubling questions, at least for the atheist. It seems that the initial value of Ω could have been any number, but only a very small range in values could have led to a universe in which we exist. If Ω were too small, then the universe would have rapidly expanded to the point that the density would have been too low for stars and galaxies to form. Thus there could have been no planets and no life. Ergo, we would not have evolved to observe the universe. If on the other hand the value of Ω were initially too large, the universe would have ceased expanding long ago and contracted back to a “big crunch.” This would not have allowed enough time for us to evolve. Either way, we should not exist. Therefore the correct conditions that would have allowed our existence were present in the universe from the beginning.

    The Anthropic Principle

    Nor is the value of Ω the only feature of the universe fit for our existence. Scientists have identified a number of other parameters upon which our existence depends. Examples include the masses and charges of elementary particles, as well as the constants, such as the permittivity of free space, that govern their interactions. If some of these constants had slightly different values, then stable atoms as we know them would not be possible or the unique properties of carbon and water upon which life depends would not exist. All of these quantities are fundamental, that is, they do not depend upon other parameters, but are instead numbers that had to assume some values. There is no reason why those constants have the values that they have, other than the fact that they just do. Of all the random permutations of the constants that could have occurred, our universe exists as it does with these particular numbers. What is the probability that the universe would assume parameters that would be conducive to life, or even demand that life exist? To some it appears that the universe is designed; from its beginning the universe was suitable for our existence. In the early 1970s a scientist named Brandon Carter dubbed this line of reasoning the anthropic principle.1

    To many Christians this constitutes strong evidence of God’s existence and has become part of their apologetics.2 Of course, use of the anthropic principle assumes that the big-bang cosmogony is correct. There is much difficulty in reconciling the big bang to a faithful rendering of the Genesis creation account, a topic that will be explored in a later chapter.

    To atheists and agnostics the case is not nearly as clear. How do they resolve this issue? They try several approaches. One is to argue that the probability question has been improperly formulated. They maintain that one should ask what the probability of the existence of something is only before that something is actually observed. Once the object in question is known to exist, the probability that it exists with its specified characteristics is 1, no matter how unlikely it may seem to us.

    I can use myself as an example. If one considers the genetic makeup of my parents, it is obvious that there were literally billions of different combinations of children that my parents could have had. Each potential child would have had unique features, such as sex, height, build, and eye and hair color, to mention just a few. My parents only had two children, so it would seem that I am extremely improbable. Yet, when people meet me for the first time, they are not (usually!) amazed by my existence. Most people recognize that given that I exist, I must exist in some state. Therefore the probability that I exist as I do is 1. They argue that the incredible odds against my having the traits that I have only make sense if the probability were asked before I was conceived. In like fashion the universe exists, so the probability that it exists as it does must be 1. Therefore, they claim, we should not be shocked that the universe exists as it does.

    How does one respond to this answer? We shall see in chapter 4 that a similar argument is used against the work of the astronomer Halton Arp, so the discussion there would apply here as well. We will repeat some of that here. We use probability arguments all of the time to eliminate improbable explanations. DNA testing is now used in many criminal cases. If there is a tissue sample of the perpetrator of a crime left at the scene of the crime, then DNA often can be extracted. The sample may be skin or blood cells, hair, or even saliva on a cigarette butt. Comparison of the DNA from the sample with DNA extracted from a suspect can reveal how well the two DNA samples match. Often this is expressed as how improbable it would be for two people selected at random to share the same DNA. If the probability were as little as one in a million, then that would be considered solid evidence of guilt to most people. However, a defense attorney may argue that as unlikely as a match between his innocent client and the truly guilty party is, the match actually happened so the probability is 1. That argument alone without any other evidence to exonerate the defendant is obviously very lame and would not convince any competent juror. Yet, this answer to Arp’s work asks us to believe a similar argument.

    [Graph with vertical axis labeled “Time” and horizontal axis labeled “Size of Expansion,” containing a series of shapes: on the left, universes that collapsed back on themselves before life could begin (closed tops); on the right, universes that expanded too quickly and will continue to expand forever (tops opening more and more widely). Image courtesy of Bryan Miller.]

    There are other possible answers to the anthropic principle. For instance, some cosmologists suggest that our universe may not be unique.3 Our universe may be just one of many or even infinite universes. This concept of a “multi-verse” will be discussed further shortly. In this view each separate universe has its own unique properties, a few having properties that allow for life, but most being sterile. We could not exist in most of the universes, so it should not surprise us that we exist in a universe that is conducive for life. This explanation gets very close to the essence of the response to the anthropic principle discussed above. The only difference is that this answer seeks to explain our existence by appealing to a large sample size. The reader should note that this sort of answer is hardly scientific (how could it be tested?), and amounts to rather poor philosophy at best.

    Inflation

    Returning to the flatness problem, a radically different answer was pursued in the early 1980s. Late in 1979 Alan Guth suggested that the early universe might have undergone an early rapid expansion. According to this scenario, shortly after the big bang (somewhere between 10^-37 and 10^-34 seconds after the big bang) when the universe was still very small, the universe quickly expanded in size by many orders of magnitude (the increase in the size of the universe might have been from the size of an elementary particle to about the size of a grapefruit). This behavior has been called inflation. Inflation would have happened far faster than the speed of light. To some people this appears to be a violation of Einstein’s theory of special relativity, which tells us that material objects cannot move as fast as the speed of light, let alone faster than light. However, in the inflationary model objects do not move faster than the speed of light; rather, space expands faster than light and carries objects along with it. The initial value of Ω may have not been particularly close to 1, but as a result of inflation it was driven to be almost identically equal to 1. Therefore the universe was not fine-tuned from the beginning, but rather was forced to be flat through a very natural process. Inflation thus solves the flatness problem without invoking the anthropic principle.
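    For a sense of the numbers (the sizes below are my own illustrative choices, since the text gives only rough ones):

```python
# Back-of-envelope expansion factor for the inflation scenario above.
import math

size_before = 1e-15  # m, roughly the size of a proton (assumed)
size_after = 0.1     # m, roughly the size of a grapefruit (assumed)

factor = size_after / size_before
print(f"expansion factor ~ 1e{math.log10(factor):.0f},"
      f" or about {math.log(factor):.0f} e-folds")
# ~1e14, i.e. about 32 e-folds, in a tiny fraction of a second.
```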

    Inflation can explain several difficulties other than the flatness problem. One of these is the homogeneity of the universe. The CBR appears to have the same temperature in every direction. If two objects that have different temperatures are brought together so that they may exchange heat, we say that they are in thermal contact. Once the two objects no longer exchange heat while still in thermal contact, they must have the same temperature and we say that they have come into thermal equilibrium. Regions of the universe that are diametrically opposite from our position and from which we are now receiving the CBR have yet to come into thermal contact, yet those regions have the same temperature. How can that be if they have not been in thermal contact before? This problem is often called the horizon problem, because parts of the universe that should not have come into contact yet would be beyond each other’s horizon. In an inflationary universe, very small regions of the universe could have come into thermal equilibrium before inflation happened. After inflation, the regions could have been removed from thermal contact until thermal contact was reestablished much later. With this possibility, widely dispersed regions had been in thermal equilibrium earlier, so it is not surprising that they are still in thermal equilibrium.

    [Illustrations of three fields: gravitational, magnetic, and electric. Image courtesy of Bryan Miller. Caption: Examples of fields.]

    What mechanism drives inflation? Two classes of solutions have been suggested. One possibility is an energy field, called an “inflaton,” that fills the universe. Fields are used in physics to describe a number of phenomena. Examples of fields are gravitational fields that surround masses, electric fields around charges, and magnetic fields around magnets. Fields can be thought of as permeating and altering space. The release of the inflaton’s energy would have powered inflation.

    An alternate suggestion is that inflation was powered by a process that is sometimes called “symmetry breaking.” There are four recognized fundamental forces of nature: the gravitational force, the electromagnetic force, and the weak and strong nuclear forces. All observed forces can be described as manifestations of one of these fundamental forces. The history of physics is one of gradual unification of various, apparently disparate, forces. For instance, during the early and middle parts of the 19th century, a series of experimental results suggested that electrical and magnetic phenomena were related. A set of four equations formulated by James Clerk Maxwell unified electricity and magnetism into a single theory of electromagnetism. During the 1970s a theory that united the electromagnetic force with the weak nuclear force was established. In fact, Steven Weinberg, author of the very famous popular-level book on the big bang, The First Three Minutes, shared the 1979 Nobel Prize in physics for his contribution to this unification. While the electromagnetic and weak nuclear forces have different manifestations today, the unification of these two forces into a single theory means that they would have been a single phenomenon at the much higher temperatures present in an early big-bang universe. With this unification we can say that there are now three fundamental forces of nature.

    Most physicists believe that all the forces of nature can be combined into a single theory. Work is progressing on a theory that will unify all of the fundamental forces, save gravity. Gravity is believed to be hard to unify with the others, because gravity is so much weaker than the other forces. If and when such a theory is found, it will be called a grand unified theory (GUT). Physicists hope that one day gravity can be combined with a GUT to produce a theory of everything (TOE). Much research is dedicated to finding a GUT, and there are several different approaches to the search. Almost all involved agree that the unification of forces would only happen at very high energies and temperatures. This is why attempts at developing a GUT require the use of huge particle accelerators—bigger accelerators produce higher energies. Cosmologists think that the temperature of the very early universe would have been high enough for all of the forces of nature to be unified. This unity of forces represents a sort of symmetry. As the universe expanded and cooled, the forces would have separated out one by one. Being the weakest by far, gravity would have separated first and then been followed by the others. Each separation would have been a departure from the initially simpler state, introducing a form of asymmetry in the forces of nature. Therefore the separation of each force from the single initial force is called symmetry breaking.

    Symmetry breaking is similar to a phase transition in matter. When ice melts, it requires the absorption of energy that cools the environment of the ice. Likewise when water freezes it releases energy into the environment. When symmetry breaking occurs, energy is released into the universe. This energy powers the inflation. Many cosmologists think that it is possible that the universe could undergo another symmetry-breaking episode with potentially cataclysmic results for humanity. Of course, without any knowledge of the relevant physics required, it is impossible to predict when or even if such a thing is likely.

    Since its inception there have been thousands of papers written about the inflationary universe, and there have been more than 50 variations of inflationary theories proposed. Because inflation has been able to explain several difficult problems, it will probably remain a major player in big-bang cosmology for some time to come. Almost no one has noticed that there are no direct observational tests for inflation, its appeal being directly a result of its ability to solve some cosmological problems. The inflation model plays an important role in origin scenarios of the big bang, as we shall see shortly.

    String Theory

    Another new idea important in cosmology is string theory. String theory posits that all matter consists of very small entities that behave like tiny vibrating strings. In addition to the familiar three dimensions of space, string theory requires that there be at least six more spatial dimensions. This brings the total number of dimensions to ten, nine spatial and one time dimension. Why have we not noticed these extra dimensions? Since the early universe, these dimensions have been “rolled up” into an incredibly small size so that we cannot see them. Nevertheless, these dimensions would have played an important role in the behavior of matter and the universe early in its history. This introduces the relationship between cosmology and particle physics. The unification of physical laws presumably existed in the high energy of the early universe. Since the interactions of fundamental particles would have been very strong in the early universe, the proper theory of those interactions must be included in cosmological models.

    Many popular-level books have been written on string theory. Even the Christian astronomer (and progressive creationist) Hugh Ross has weighed in with a treatise4 where he invokes string theory to explain a number of theological questions. What is easy to miss in all of these writings is that string theory is a highly speculative theory for which there is yet no evidence. It may be some time before this situation changes. Among cosmologists the tentative nature of string theory is recognized, and there are other possible theories of elementary particles.

    Dark Matter

    Galaxies tend to be found in groups called clusters. Large clusters of galaxies may contain over a thousand members. Astronomers assume that these clusters are gravitationally bound; that is, that the members of a cluster follow stable orbits about a common center of mass. In the 1930s the astronomer Fritz Zwicky measured the speeds of galaxies in a few clusters. He found that the individual galaxies were moving far too fast to be gravitationally bound, a fact since confirmed for many other clusters. This means that the member galaxies are flying apart and over time the clusters will cease to exist. The break-up time of a typical cluster is on the order of a billion years or so, far less than the presumed age of the clusters. Some creationists cite this as evidence that the universe may be far younger than generally thought. In other words, the upper limit to the age of these structures imposed by dynamical considerations might be evidence left by our Creator.

    To preserve the antiquity of clusters of galaxies, astronomers have proposed that the clusters contain much more matter than we think. There are two ways to measure the mass of a cluster of galaxies. One is to measure how much light the galaxies in the cluster give off (luminous mass). Counting the number of galaxies involved and measuring their brightnesses give us an estimate of the mass of a cluster. Studies of the masses and total light of stars in the solar neighborhood give us an idea of how much mass corresponds to a given amount of light. The second way to estimate the mass is to calculate how much mass is required to gravitationally bind the members of the cluster given the motions of those members (dynamic mass). Comparison of these two methods shows that in nearly every case the dynamic mass is far larger than the luminous mass. In some cases the luminous mass is less than 10% of the dynamic mass.
    [Photograph of stars behind a pie chart indicating 73% dark energy, 23% dark matter, 3.6% intergalactic gas, and 0.4% stars, etc. Image courtesy of NASA.]
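    The “dynamic mass” argument described above can be sketched in a few lines (order-of-magnitude numbers of my own choosing, not Zwicky’s actual data): a gravitationally bound cluster must satisfy, roughly, M ~ v²R/G.

```python
# Order-of-magnitude dynamic mass of a galaxy cluster: M ~ v^2 R / G.
G = 6.674e-11       # m^3 kg^-1 s^-2
v = 1.0e6           # m/s: ~1000 km/s, a typical galaxy speed in a rich cluster
R = 1.5 * 3.086e22  # m: a ~1.5 Mpc cluster radius
M_sun = 1.989e30    # kg

M_dyn = v**2 * R / G
print(f"dynamic mass ~ {M_dyn / M_sun:.0e} solar masses")
# ~4e14 solar masses -- far more than the luminous matter supplies.
```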

    If the dynamic mass calculations are the true measure of the masses of clusters of galaxies, then this suggests that the vast majority of mass in the universe is unseen. This has been dubbed dark matter. If this were the only data supporting the existence of dark matter, then suspicion of the reality of dark matter would be quite warranted. However, in 1970 other evidence began to mount for the existence of dark matter. In that year an astronomer found that objects in the outer regions of the Andromeda Galaxy were orbiting faster than they ought. This was unexpected. Gravitational theory suggests that within the massive central portion of a galaxy, from which most of its light originates, the speeds of orbiting objects should increase linearly with distance from the center. This is confirmed by observation. However, theory also suggests that farther out from the central portion of a galaxy (beyond where most of the mass appears to be) orbital speeds should be Keplerian. Orbiting bodies are said to follow Keplerian motion if they follow the three laws of planetary motion discovered by Kepler four centuries ago. An alternate statement of Kepler’s third law is that orbital speeds are inversely proportional to the square root of the distance from the center. What was found instead is that the speeds of objects very far from the center are independent of distance or even increase slightly with distance. Similar behavior has been found in other galaxies, including the Milky Way.
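    The expected Keplerian falloff is easy to sketch (toy numbers, with all of the luminous mass treated as if concentrated at the center):

```python
# Keplerian rotation curve, v = sqrt(G M / r), for a toy galaxy.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M = 1e11 * 1.989e30  # kg: ~1e11 solar masses in the central region (assumed)
kpc = 3.086e19       # meters per kiloparsec

for r_kpc in (10, 20, 40, 80):
    v = math.sqrt(G * M / (r_kpc * kpc))   # circular orbital speed
    print(f"r = {r_kpc:3d} kpc:  v ~ {v/1000:.0f} km/s")
# Speeds should fall by 1/sqrt(2) for each doubling of r; observed
# rotation curves instead stay flat, which is the puzzle in the text.
```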

    This strange behavior for objects orbiting galaxies at great distances is independent evidence for dark matter, but it also tells where dark matter resides. If these objects are truly orbiting, then basic physics demands that much matter must exist within the orbits of these bodies, but beyond the inner galactic regions where most of the light comes from. These outer regions are called the halos of galaxies. Since there is little light coming from galactic halos, this matter must be dark. Estimates of the amount of halo dark matter required to produce the observed orbits are consistent with the estimates from clusters of galaxies. Both suggest that, like an iceberg, what we see only accounts for about 10% of the mass.

    What is the identity of dark matter? There have been many proposed theories. “Normal” matter consists of atoms made of protons, neutrons and electrons. The masses of the neutron and proton are very similar, but the electron is about 1,800 times less massive than the proton or neutron. Protons and neutrons belong to a class of particles called baryons. Since most of the mass of atoms is accounted for by baryons, “normal” matter is said to be baryonic. We would be most comfortable with baryonic solutions to the dark matter question, but baryonic matter is difficult to make invisible. While faint, low-mass stars are by far the most common type of stars and hence account for most stellar mass, they are so faint that the light of galaxies is dominated by brighter, more massive stars. However, even if dark matter consisted entirely of extremely faint stars, their combined light would be easily visible. If the matter were in much smaller particles such as dust, the infrared emission from the dust would be easily detected. Some have proposed that dark matter is contained in many planet-sized objects. This solution, dubbed MACHO (for MAssive Compact Halo Object), avoids the detectable emission of larger and smaller objects just mentioned. There has been an extensive search for MACHOs, and there is some data to support this identification, though this is still controversial.

    More exotic candidates for dark matter abound. Some suggest that dark matter consists of many black holes that do not interact with their surroundings enough to be detected with radiation. Another idea is that if neutrinos have mass, then large clouds of neutrinos in galactic halos might work. During the summer of 2001 strong evidence was found that neutrinos indeed have mass. Alternatively, heretofore-unknown particles have been proposed. One is called WIMPS, for Weakly Interacting Massive ParticleS. Obviously MACHO was named in direct competition with WIMPS. The identity of dark matter is another example of how cosmology and particle physics could be intimately related.

    The relationship of dark matter to cosmology should be obvious. The fate of the universe is tied to the value of Ω, and Ω depends upon the amount of matter in the universe. If 90% of the matter in the universe is dark, then Ω could be very close to 1, and dark matter would have a profound effect upon the evolution of the universe over billions of years. The presence of dark matter would have been vitally important in the development of structure in the early universe. The universe is generally assumed to have been very smooth right after the big bang. This assumption is partly based upon simplicity of calculation, but also upon the unstable nature of inhomogeneities in mass. If the matter in the universe had appreciably clumped, then those clumps would have acted as gravitational seeds to attract additional matter and hence would have grown in mass. If those gravitational seeds were initially too great, then nearly all of the matter in the universe would have been sucked into massive black holes, leaving little mass to form galaxies, stars, planets, and people. If, on the other hand, the mass in the early universe were too smooth, there would have been no effective gravitational seeds, and no structures such as galaxies, stars, planets, and people could have arisen. The range of homogeneity within which the initial conditions of the big bang could have lain and still given rise to the universe that we now see must have been quite small. This is another example of the fine-tuning that the universe has apparently undergone, and that to some suggests the anthropic principle.

    If dark matter exists, then its role in a big-bang universe must be assessed. Most considerations include how much dark matter exists and in what form. The dark matter may be hot or cold, depending upon how fast the matter was moving. If the dark matter moved quickly then it is termed hot. Otherwise it is cold. The speed depends upon the mass and identity of dark matter. It should be obvious that at this time dark matter is a rather free parameter in cosmology.

    The COBE and WMAP Experiments

    The early universe must have had some slight inhomogeneity in order to produce the structure that we see today. If there were no gravitational seeds to collect matter, then we would not be here to observe the universe. Cosmologists have managed to calculate about how much inhomogeneity must have existed in the big bang. This inhomogeneity would have been present at the age of recombination when the radiation in the CBR was allegedly emitted. The CBR should be very uniform, but the inhomogeneity would have been imprinted upon the CBR as localized regions that are a little warmer or cooler than average. Predictions of how large the inhomogeneities should be led to the design of the COBE (COsmic Background Explorer, pronounced KOB-EE) satellite. COBE was designed to accurately measure the CBR over the entire sky and measure the predicted fluctuations in temperature.

    The two-year COBE experiment ended in the early 1990s with a perfectly smooth CBR. This means that temperature fluctuations predicted by models then current were not found. Eventually a group of researchers used a very sophisticated statistical analysis to find subtle temperature fluctuations in the smooth data. Variations of one part in 10^5 were claimed. Subsequent experiments that were more limited in scope were claimed to verify this result. These have been hailed as confirmation of the standard cosmology.

    However, there are some lingering questions. For instance, while the COBE experiment was designed to measure temperature variations, the variations allegedly found were an order of magnitude less than those predicted. Yet this is hailed as a great confirmation of the big-bang model. Some have written that the COBE results perfectly matched predictions, but this is simply not true. Since the COBE results, some theorists have recalculated big-bang models to produce the COBE measurements, but this hardly constitutes a perfect match. Instead, the data have guided the theory rather than the theory predicting the data.

    Another fact that has been lost by many people is that the alleged variations in temperature were below the sensitivity of the COBE detectors. How can an experiment measure something below the sensitivity of the device? The variations became discernible only after much processing of the COBE data with high-powered statistics. One of the COBE researchers admitted that he could not point to any direction in the sky where the team had clearly identified a hotter or cooler region. This is a very strange result. No one knows where the hotter or cooler regions are, but the researchers involved were convinced by the statistics that such regions do indeed exist. Unfortunately, this is the way that science is increasingly being conducted.

    [Image of the WMAP spacecraft, with several parts labeled. Image courtesy of NASA. Caption: WMAP (Wilkinson Microwave Anisotropy Probe).]

    To confirm the temperature fluctuations allegedly discovered by COBE, the WMAP satellite was designed and then launched early in the 21st century. WMAP stands for the Wilkinson Microwave Anisotropy Probe; it was originally designated MAP, but was renamed after David Wilkinson, one of the main designers of the mission, died while the mission was underway. WMAP was constructed to detect the faint temperature variations indicated by COBE, and WMAP did confirm those fluctuations. In early 2003 a research team used the first WMAP results along with other data to establish some of the latest measurements of the universe. This study produced a 13.7 billion year age for the universe, plus or minus 1%. It also determined that visible matter accounts for only a little more than 4% of the mass of the universe. Of the remaining mass, some 23% is in the form of dark matter, with the remaining 73% in an exotic new form dubbed “dark energy.” Dark energy will be described shortly.

    The Hubble Constant

    In the first chapter we saw that Hubble’s original measurement of H0 was greater than 500 km/sec/Mpc, but that the value of H0 had fallen to 50 km/sec/Mpc by 1960. The value of H0 remained there for more than three decades. In the early 1990s new studies suggested that H0 should be closer to 80 km/sec/Mpc. Astronomers who had for years supported the older value of H0 strongly attacked the new value, and so there was much conflict on this issue for several years.

    [Chart with distance as the horizontal axis and velocity as the vertical axis, showing a number of data points and three lines.] The Hubble constant describes how fast objects appear to be moving away from our galaxy as a function of distance. If you plot apparent recessional velocity against distance, as in the figure above, the Hubble constant is simply the slope of a straight line through the data.

    Besides professional pride, what else was at stake here? Not only can the Hubble constant give us the distance of galaxies, it can be used to find the approximate age of the universe. The inverse of the Hubble constant, T_H, is called the Hubble time, and it tells us how long ago the big bang was, assuming that Λ is zero and neglecting any decrease in the expansion due to the self-gravity of matter in the universe. Since the universe must have undergone some sort of gravitational deceleration, the Hubble time is an upper limit to the age of a big-bang universe. If you examine the units of H0 you will see that it has the dimensions of distance per time per distance, so that the distances cancel and you are left with inverse time. Therefore T_H has the units of time, but the Mpc must be converted to kilometers and the seconds should be converted to years.

    For instance, a Hubble constant of 50 km/sec Mpc gives a TH of 1/50 Mpc sec/km. A parsec contains about 3×10^13 km, so an Mpc equals about 3×10^19 km. A year has approximately 3×10^7 seconds. Putting this together we get

    TH = (1/50 Mpc sec/km)(3×10^19 km/Mpc)(1 year / 3×10^7 sec) = 2×10^10 years.

    Therefore a Hubble constant of 50 km/sec Mpc yields a Hubble time of 20 billion years. Factoring in a reasonable gravitational deceleration gives the oft-quoted age since the big bang of 16 to 18 billion years.
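
    To check the arithmetic, here is a minimal sketch in Python of the same unit conversion, using the rounded constants from the text (the function name and the rounding are mine, for illustration only):

        # Hubble time T_H = 1/H0, converted from Mpc*sec/km into years,
        # using the rounded constants quoted in the text.
        KM_PER_MPC = 3.0e19      # rounded; more precisely 3.086e19
        SEC_PER_YEAR = 3.0e7     # rounded; more precisely about 3.156e7

        def hubble_time_years(H0_km_s_Mpc):
            """Hubble time in years for H0 given in km/sec Mpc."""
            t_seconds = KM_PER_MPC / H0_km_s_Mpc   # (km/Mpc)/(km/sec/Mpc) = sec
            return t_seconds / SEC_PER_YEAR

        print(hubble_time_years(50))   # ~2.0e10 years: the 20 billion quoted above
        print(hubble_time_years(80))   # ~1.25e10 years: the "crisis" value below
        print(hubble_time_years(72))   # ~1.4e10 years: the later consensus value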

    Cosmic Strings

    A brief mention should be made of cosmic strings, which must not be confused with the strings of the string theory of particles. Surveys of galaxies and clusters of galaxies show that they are not uniformly distributed. Instead, clusters of galaxies tend to lie along long, interconnected strands. If galaxies and other structures of the universe condensed around points that had greater than average mass and thus acted as gravitational seeds, then why are galaxies now found along long arcs? One possible answer is cosmic strings. Cosmic strings are hypothesized structures that stretch over vast distances in the universe. The strings are extremely thin but very long, and they contain incredible mass densities along their extent. Obviously cosmic strings are not made of “normal” matter. Cosmic strings were proposed to act as gravitational seeds around which galaxies and clusters formed. There is as yet no evidence of cosmic strings, and so this idea remains controversial.

    Since the Hubble time is inversely proportional to the Hubble constant, doubling H0 would halve TH. The suggestion that H0 should be increased to 80 km/sec Mpc decreased the Hubble time to about 12.5 billion years. Gravitational deceleration would have decreased the actual age of the universe to as little as 8 billion years. This might ordinarily have been accepted, except that astronomers were convinced that globular star clusters, which contain what are thought to be among the oldest stars in our galaxy, were close to 15 billion years old. Thus a higher Hubble constant would place astronomers in the embarrassing position of having stars older than the universe.

    There were several possible ways to resolve this dilemma, and astronomers eventually settled upon a combination of two. First, the teams of astronomers who were championing different values for H0 found some common ground and were able to reach a consensus between their two values. At the time of the writing of this book (2003) the established value for H0 is 72 km/sec Mpc. This gives an age of the universe between 12 and 15 billion years, with the preferred value at the time of this writing as 13.7 billion years. Second, the ages of globular star clusters were reevaluated. We will not discuss how this was done in detail, but it involves properly calibrating color-magnitude diagrams of globular clusters. Calibration requires knowing the distance, and the Hubble Space Telescope provided new data that enabled us to more accurately know the distances of globular clusters. The recalibration reduced the ages of globular clusters to a range only slightly less than the new age of the universe. In the estimation of most cosmologists the uncertainty in both ages allows enough time for the formation of the earliest stars sometime after the big bang.

    This episode does illustrate the changing nature of science and the unwarranted confidence that scientists often place in the thinking of the day. Before this crisis in the age of the universe and the ages of globular clusters, most astronomers were thoroughly convinced that both of these ages were correct. Anyone who had suggested that globular clusters were less than 15 billion years old would have been dismissed rather quickly. However, when other data demanded a change, necessity as the mother of invention stepped in, and a way to reduce the ages of globular clusters was found. The absolute truth of the younger ages has now replaced the absolute truth of the older ages. What most scientists miss is that, apart from crises, the new truth would never have been discovered. We would have blithely gone on totally unaware that our “objective approach” to the ages of globular clusters had for a long time failed to give us the “correct” value.

    The Return of the Cosmological Constant

    As discussed in chapter 1, Einstein had given a non-zero value to the cosmological constant to preserve a static universe, a move that he later regretted. For some time a Λ equal to zero was in vogue, and many cosmologists frowned upon any suggestion otherwise. Actually, the idea of a non-zero Λ never really went away. For instance, by the 1950s many geologists were insisting that the age of the earth was close to the currently accepted value of 4.6 billion years, but the Hubble constant of the day was far too large to permit the universe to be this old. Some cosmologists proposed that a large Λ had increased the rate of expansion in the past, so that the corresponding Hubble time gave a false indication of the true age of the universe. Just as gravitational deceleration can cause the actual age of the universe to be far less than the Hubble time, an acceleration powered by Λ can cause the actual age of the universe to be greater than the Hubble time. In the mid-1950s the cosmological distance scale was revised in such a fashion that the Hubble constant decreased to roughly its present value, with a corresponding increase in the Hubble time, producing a universe much older than 4.6 billion years. Therefore there no longer seemed to be much need for a non-zero Λ.

    After four decades of smugness, Λ has made a comeback. In 1998 some very subtle cosmological studies, using distances from type Ia supernovae and linking several parameters of the universe, suggested that the best fit to the data requires Λ to have a small non-zero value. Since its reemergence, astronomers have begun to call the cosmological constant “dark energy.” The cosmological constant corresponds to energy because it represents a repulsive force, and such forces can always be written in terms of a potential energy. Einstein showed that energy and mass are equivalent, so cosmic repulsion can be viewed similarly to mass. Since neither cosmic repulsion nor dark matter can be seen, and since both critically affect the structure of the universe, it is appropriate to view the two in a similar way. As uncomfortable as this may be for some, cosmologists have been forced to reconsider the cosmological constant. Where this will lead is not known at the time of this writing.

    The value of Λ has ramifications for the future of the universe. In most discussions of cosmology, the future of the universe is tied to its geometry. These discussions are based upon the model developed by the Russian mathematician Alexander Friedmann in 1922, a model that is called the Friedmann universe. The Friedmann universe supposes that the value of Λ is zero. In the Friedmann model, if the average density of the universe is below some critical density, then the universe is spatially infinite and will expand forever. This corresponds to negative curvature, where there are an infinite number of lines through a point that are parallel to any other line. If the average density of the universe is above the critical density, then the universe is spatially finite, though it has no boundary. This universe will eventually cease expanding and reverse into a contraction. The geometry of this universe has positive curvature, so that there are no parallel lines. The critical density depends upon the Hubble constant. The currently accepted value of the Hubble constant results in a critical density that is higher than the density of luminous matter in the universe. Dark matter and dark energy bring the total density of the universe very close to the critical density, though no one expects it to exceed the critical density.
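
    The critical density itself comes from a standard result of the Friedmann equations, ρc = 3H0^2/(8πG). A minimal sketch for the currently accepted H0 of 72 km/sec Mpc (the constants and function name here are mine, for illustration):

        import math

        G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
        M_PER_MPC = 3.086e22     # meters per megaparsec

        def critical_density(H0_km_s_Mpc):
            """Critical density rho_c = 3 H0^2 / (8 pi G), in kg/m^3."""
            H0_si = H0_km_s_Mpc * 1000.0 / M_PER_MPC   # convert to 1/sec
            return 3.0 * H0_si**2 / (8.0 * math.pi * G)

        print(critical_density(72))   # ~9.7e-27 kg/m^3, roughly 6 hydrogen atoms per cubic meter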

    A universe that will expand forever is said to be open, while a universe that will cease expanding is called closed. Technically, the terms open and closed refer to the geometry of the universe, but in a Friedmann universe they may refer to the ultimate fate of the universe as well. However, when Λ is not zero this relationship is broken. In such a universe, the open or closed status refers only to the geometry via the density, not to the ultimate fate: for instance, a closed universe could expand forever. This is a fine point that many books on cosmology get wrong, because they only consider Friedmann models. For many years only Friedmann models were seriously considered. Since 1998 non-Friedmann models have dominated cosmological thinking, and with time this fine point will probably work its way into many books about cosmology.

    The Origin of the Universe

    The origin of the universe is a mysterious topic. For instance, the sudden appearance of matter and energy would seem to violate the conservation of energy (the first law of thermodynamics) and matter. Science is based upon what we can observe. Regardless of how or when the universe came into being, it was an event that happened only once in time (as we know time). No human being was present at the beginning of the universe, so one would expect that the origin of the universe is not a scientific question at all, but that has not kept scientists from asking whence came the big bang. As discussed further in the next chapter, some Christian apologists see in the big bang evidence of God’s existence. Their reasoning is that something cannot come from nothing, and so there must be a Creator. Cosmologists are well aware of this dilemma, and they have offered several theoretical scenarios whereby the universe could have come into existence without an external agent.

    One proposal, originally offered by Edward Tryon in 1973, is that the universe came about through what is called a quantum fluctuation. As discussed in the beginning of chapter 1, quantum mechanics tells us that particles have a wave nature, and thus there is a fundamental uncertainty that is significant in the microscopic world. By its very nature a wave is spread out, so that one cannot definitely assign a location to the wave. Usually this principle is called the Heisenberg uncertainty principle, named for the German physicist who first deduced it. The uncertainty principle can be stated a couple of different ways. One statement involves the uncertainty in a particle’s position and the uncertainty in a particle’s momentum. Momentum is the product of a particle’s mass and velocity. Whenever we measure anything, there is uncertainty in the measurement. The Heisenberg uncertainty principle states that the product of the uncertainty in a particle’s position and the uncertainty in a particle’s momentum must be no less than a certain fundamental constant. In mathematical form this formulation of the uncertainty principle appears as

    Δx Δp ≥ ħ/2

    where Δx is the uncertainty in the position of a particle and Δp is the uncertainty in the momentum of a particle. The fundamental constant is ħ, called h-bar, and is equal to 1.055×10^-34 joule-seconds.

    What the uncertainty principle means is that the more accurately we know one quantity (the smaller its uncertainty), the less accurately we know the other quantity (the greater its uncertainty). If we measure the position of a small particle such as an electron very precisely, then we know very little about the particle’s momentum. Since we know the mass of an electron quite well, the uncertainty in the momentum is mostly due to our ignorance of the electron’s speed. If, on the other hand, we know the particle’s speed to a high degree of accuracy, we will not know the particle’s position very well. Recall from the discussion in chapter 1 that this is a fundamental uncertainty, and not merely a limitation imposed by our measuring techniques. That is, even if we had infinite precision in our measuring techniques, we would still have the limitation of the uncertainty principle.

    This behavior seems rather bizarre, because it is not encountered in everyday experience. The reason is that the wavelengths of large objects are so small that we cannot see the wave nature of macroscopic objects. Another way of looking at it is that ħ is very small, so small that the uncertainties in position and momentum of macroscopic systems are completely dwarfed by macroscopic errors in measurement totally unrelated to the uncertainty principle. Therefore, while the uncertainty principle applies to all systems, its effects are noticeable only in very small systems, where the value of ħ is comparable to the properties of the objects involved. As bizarre as the uncertainty principle may seem, it has been confirmed in a number of experiments.
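
    To see why the effects show up only in small systems, consider the minimum speed uncertainty Δv ≥ ħ/(2mΔx) implied by the relation above. A short sketch; the masses and position uncertainties below are illustrative values of my own choosing, not anything from the text:

        HBAR = 1.055e-34   # J*sec, as given in the text

        def min_speed_uncertainty(mass_kg, dx_m):
            """Minimum speed uncertainty from dx*dp >= hbar/2 with dp = m*dv."""
            return HBAR / (2.0 * mass_kg * dx_m)

        # An electron confined to an atom-sized region: the effect is enormous.
        print(min_speed_uncertainty(9.11e-31, 1e-10))   # ~5.8e5 m/s

        # A one-microgram dust grain located to within a micron: utterly negligible.
        print(min_speed_uncertainty(1e-9, 1e-6))        # ~5e-20 m/s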

    Another statement of the uncertainty principle involves the uncertainty in measuring a particle’s energy and the uncertainty in the time required to conduct the experiment. In mathematical form this statement is

    ΔE Δt ≥ ħ/2

    where ΔE is the uncertainty in the energy and Δt is the uncertainty in the time. Basically this statement means that we can measure the energy of a microscopic system with some precision or we can measure the time of the measurement with some precision, but we cannot measure both with great precision simultaneously.

    One application of this statement of the uncertainty principle is a process whereby a pair of virtual particles can be produced. The conservation of mass and energy (they are related through Einstein’s famous equation E = mc^2) seems to prevent the spontaneous appearance of particles out of nothing. However, nothing else prevents this from happening, and the uncertainty principle offers a way around this objection, if only for a short period of time. For instance, in empty space an electron and its anti-particle, the positron, can spontaneously form. This introduces a violation of the conservation of energy of size ΔE. Being anti-particles, the electron and positron have opposite charges, so they attract one another. As the two particles come into contact they annihilate and release the same amount of energy that was required to create them. The energy conservation violation that occurred when the particle pair formed is exactly cancelled by the energy released when the particles annihilate; that is, there is no net change in the energy of the universe. As long as the particle pair exists for a short enough period of time Δt that the product of ΔE and Δt does not violate the uncertainty principle, this brief, trifling violation of the conservation of energy/mass can occur. Such events are called quantum fluctuations. A number of quantum mechanical effects have been interpreted as manifestations of quantum fluctuations.

    Larger violations of the conservation of energy cannot exist for as long a time interval as smaller violations. For example, since protons have nearly 2,000 times as much mass (and hence energy) as electrons, proton/anti-proton pairs produced this way can last no more than about 1/2,000 as long as electron/positron pairs. A macroscopic violation of the conservation of energy would last for such a short length of time that it could not be observed. However, what would happen if a macroscopic phenomenon had exactly zero energy? To be more specific, suppose that the universe has a total energy of zero. Then the universe could have come into existence and lasted for a very long period of time, because if ΔE is zero, Δt can have any finite value and still satisfy the uncertainty principle. Therefore the universe could have come into existence without violating the conservation of energy. If this were true, then the universe is no more than a quantum fluctuation.
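
    A rough sketch of this scaling, assuming ΔE is the rest energy 2mc^2 of the pair and taking Δt ≈ ħ/(2ΔE) as the longest allowed lifetime (the factor-of-two conventions here are approximate; the particle masses are standard values):

        HBAR = 1.055e-34   # J*sec
        C = 3.0e8          # speed of light, m/s

        def max_pair_lifetime(mass_kg):
            """Longest time a virtual particle/antiparticle pair can exist,
            from dE*dt ~ hbar/2 with dE = 2*m*c^2 (the pair's rest energy)."""
            dE = 2.0 * mass_kg * C**2
            return HBAR / (2.0 * dE)

        t_electron = max_pair_lifetime(9.11e-31)   # electron/positron: ~3e-22 sec
        t_proton = max_pair_lifetime(1.67e-27)     # proton/anti-proton: ~2e-25 sec
        print(t_proton / t_electron)               # ~1/1,800, close to the 1/2,000 above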

    The trick is to find some way to make the sum total of energy in the universe equal to zero. The universe obviously contains much energy in the form of matter (E = mc^2) and radiant energy (photons of all wavelengths), as well as more exotic particles such as neutrinos. There are forms of negative energy that many cosmologists think may balance all of this positive energy. The most obvious choice for this negative energy is gravitational potential energy. The gravitational potential energy for a particle near a large mass has the form

    E = –GmM/r

    where G is the universal gravitational constant, m is the mass of the particle, M is the larger mass, and r is the distance of the particle from the larger mass. This equation could be summed over all of the mass of the universe to obtain the total gravitational potential energy of the universe. Since the gravitational potential energy has a negative sign, all terms would be negative, and the sum must be negative as well. Therefore it is reasoned that the gravitational potential energy could exactly cancel the total positive energy, so that the total energy of the universe is zero.
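
    An order-of-magnitude sketch of this zero-energy argument, assuming a commonly quoted rough mass and radius for the observable universe; these inputs are illustrative rather than measured, so only the orders of magnitude mean anything:

        G = 6.674e-11   # m^3 kg^-1 s^-2
        C = 3.0e8       # m/s

        M = 1e53        # rough mass of the observable universe, kg (illustrative)
        R = 4.4e26      # rough radius of the observable universe, m (illustrative)

        E_mass = M * C**2        # positive energy locked up in matter, ~9e69 J
        E_grav = -G * M**2 / R   # crude self-gravitational energy, ~-2e69 J
                                 # (zero point taken at infinity, as in the text)

        print(E_mass, E_grav, abs(E_grav) / E_mass)   # same order of magnitude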

    However there are at least a couple of problems with this. First, we do not know the variables involved well enough to properly evaluate the energies to determine if indeed the energy of the universe is zero. Therefore it is more a matter of faith that the sum of the energy of the universe is zero. A second, more difficult, problem is with the negative sign in the gravitational potential energy equation. The sign appears because the reference point is taken at infinity. All potential energies require the selection of an arbitrary reference point where the potential energy is zero. The reference point for gravity is taken at infinity for mathematical simplicity. This forces all gravitational potential energies at finite distances to be negative. Any other zero point could be chosen, though that would make the mathematics more complicated. Any other reference point would make at least some of the gravitational potential energies positive. Alternately, one could add an arbitrary constant to the potential energy term, because the zero point is arbitrary. This is true for all potential energies. In other words, one cannot honestly state that the gravitational potential energy of the universe has any particular value to balance other forms of energy.

    In his original 1973 paper on the quantum fluctuation theory for the origin of the big bang, Edward Tryon stated, “I offer the modest proposal that our universe is simply one of those things which happen from time to time.” Alan Guth has echoed this sentiment with the observation that the whole universe may be “a free lunch.” Indeed, Guth’s inflationary model depends upon a quantum fluctuation as the origin of the big bang. In the inflationary model the universe sprang from a quantum fluctuation that was a “false vacuum,” an entity predicted by some particle physicists, but never observed. While a true vacuum is ostensibly empty, it can give rise to ghostly particles through pair production. On the other hand, a false vacuum can do this and more. A false vacuum would have a strong repulsive gravitational field that would explosively expand the early universe. Another peculiarity of a false vacuum is that it would maintain a constant energy density as it expands, creating vast amounts of energy more or less out of nothing.

    The quantum fluctuation theory of the origin of the universe has been expanded upon to allow for many other universes. In this view the universe did not arise as a quantum fluctuation ex nihilo, but instead arose as a quantum fluctuation in some other universe. A small quantum fluctuation in that universe immediately divorced itself from that universe to become ours. Presumably that universe also arose from a quantum fluctuation in a previous universe. Perhaps our universe is frequently giving birth to new universes in a similar fashion. This long chain of an infinite number of universes is a sort of return to the eternal universe, though any particular universe such as ours may have a finite lifetime. This idea is the multi-verse mentioned earlier that has been invoked to explain the anthropic principle. In each universe one would expect that the physical constants would be different. Only in a universe where the constants are conducive to life would cognizant beings exist to take note of such things. Thus the universes in which we could exist might be a very limited selection.

    Some cosmologists have suggested an oscillating universe to explain the origin of the universe. In this view, the mass density of the universe is sufficient to slow and then reverse the expansion of the universe. This would lead to the “big crunch” mentioned earlier. After the big crunch, the universe would “bounce” and be reborn as another big bang. This big bang would be followed by another big crunch, which would repeat in an infinite cycle. Therefore, our finite-age universe would merely be a single episode of an eternal oscillating universe. Some have fantasized that the laws of physics may be juggled between each rebirth.

    There are several things wrong with the oscillating universe. First, the best evidence today suggests that Ω is too small to halt the expansion of the universe. Second, even if the universe were destined to someday contract, there is no known mechanism that would cause it to bounce. We would expect that once the universe imploded upon itself, it would remain in some sort of black-hole state (incidentally, if the big bang started in this sort of state, then this would be a problem for the single big-bang model as well). Third, there is no way that we can test this, so it is hardly a scientific concept.

    One last attempt to explain the beginning (or non-beginning) of the universe should be mentioned. If the universe is infinite in size, then it has always been and always will be infinite in size. As the universe expands, it becomes larger and cooler, and its density decreases. What if the universe has been expanding forever? One possibility is that the physical laws that govern the universe change as the average temperature changes. This is the essence of the GUTs described earlier. Most physicists think that the fundamental forces that we observe today are different manifestations of a single force that has had its symmetry broken. Perhaps in much earlier times, when the universe was much hotter and denser, other laws of physics totally unknowable to us were in effect. If this were true, then what we call the big bang was just a transition from a much higher density and temperature state. The big bang would have been some sort of wall beyond which we cannot penetrate to earlier times with our physics. Before the big bang the universe would have contained unbelievable densities and temperatures, and the physical laws would have been quite foreign to us. Thus the universe has always been expanding through various transitions, and there is no ultimate beginning to explain. This, too, represents a return to the eternal universe that the big bang was long thought to have eliminated.

    Big-bang research of recent years has been in the direction of explaining the origin of the universe in an entirely physical, natural way without recourse to a Creator. Any purely physical explanation of origins without a Creator amounts to non-theistic evolution, naturalism, and secular humanism. All these ideas are antithetical to biblical Christianity. Those Christian apologists who fail to see this simply have failed to understand the direction that cosmology has taken in recent years.

    1. TL;DR. But are you _sure_ your nick isn’t “see_jerk”? (I kid, I kid.)

      Agreed, inflation makes creationism worse than astrology, insanely erroneous with at least a factor 10 on gravity influence from typical stars, and homeopathy, insanely erroneous with dilution to 60 oom or more. Creationism dilutes its purported magic with 100 oom (60 e-folds = e^60 ~ 10^30 linear scale up -> ~ 10^-90 dilution in volume) by construction.

      Biological creationism is even worse, as several common ancestors are more unlikely than universal common ancestry by > 10^2000. [Theobald, Nature, 2010.] Creationism, whether cosmological or biological or especially both – abrahamism is > 10^-90*10^-2020 ~ 10^-2100 less likely than straight up science – is now known to be the most insanely erroneous idea ever conceived by man!

      1. Oops, misplaced the ratio. That’s abrahamism making outrageous claims on nature as > 10^2100 times less likely than asking questions of nature.

  12. However, cosmologists realized that there were problems with the CMB. One of these was the horizon problem: the CMB observed from opposite parts of the sky had precisely the same temperature. But how could that be? Those positions opposite one another had never had a chance to exchange heat, so how could they have come into thermal equilibrium (i.e., have the same temperature)?

    More than 30 years ago, a theoretical physicist named Alan Guth suggested cosmic inflation to solve the horizon problem. According to the theory of cosmic inflation, about 10^-34 seconds after the big bang the universe briefly and rapidly expanded, or inflated, to a much larger size, with distant regions separating far faster than the speed of light. This would allow the entire observable universe initially to be in thermal contact, so that it could come into thermal equilibrium before inflation pulled its parts out of contact with one another. Cosmic inflation had the added benefit of solving another difficulty with the big bang, the flatness problem. After much discussion, cosmologists came to embrace cosmic inflation, although there had been no direct evidence for it.

    Evidence for Cosmic Inflation?

    Today, a team of scientists announced what they think may be the first evidence for cosmic inflation. This work is based upon a certain kind of polarization in the CMB. Like any other electromagnetic radiation, the CMB is a wave phenomenon. Most waves vibrate in all directions, but sometimes waves can vibrate more in one direction than in others. If so, we say that the wave is polarized. Electromagnetic waves can be polarized different ways. Different physical mechanisms can polarize electromagnetic waves differently, so by studying how and to what degree the radiation is polarized, we can gain clues as to what physical mechanisms may have been involved.

    According to the big bang model, cosmic inflation may have imprinted a certain kind of polarization on the CMB, and several experiments are now operating to look for the polarization predicted by these models. Today’s announcement is the preliminary result of one of these experiments. However, cosmic inflation is not a single theory; rather, it is a broad framework with an infinite number of variations. Thus, it may not be proper to claim that this discovery proves inflation. Rather, it may merely rule out some versions of inflation.

    Our Response

    This announcement undoubtedly will be welcomed as the long-sought proof of cosmic inflation so necessary to the big bang model. Biblical creationists know from Scripture that the universe did not begin in a big bang billions of years ago. For instance, from God’s Word we understand that the world is far younger than this. Furthermore, we know from Genesis 1 that God made the earth before He made the stars, but the big bang requires that many stars existed for billions of years before the earth did. So how do we respond to this announcement?

    First, this announcement may be improperly understood and reported. For instance, in 2003 proof of cosmic inflation was incorrectly reported, and a similar erroneous claim was made last year. Second, the predictions supposedly being confirmed are very model-dependent: if the model changes, then the predictions change. Inflation is just one of many free parameters that cosmologists have at their disposal within the big bang model, so they can alter these parameters at will to get the intended result. Third, other mechanisms could mimic the signal being claimed today. So, even if the data are confirmed, there may be some other physical mechanism at play rather than cosmic inflation.

  13. “Inflationary Big Bang theory”: it’s a theory, not proven. Wow, you guys are just stuck on your religion; it’s like you have blinders on. Listen to your language: it’s all speculation, could be, maybe, might be; it’s not even proof. But yet you’re willing to die for it. That is religion, a blind faith indeed.

  14. Matt: Question. Is the theoretical model relating gravitational waves to B-mode polarization the only believable model now, or are there alternate models that produce polarization? I can understand polarization produced by scattering from atoms or electrons, but polarization produced by gravitational waves is probably a new phenomenon for physics (as far as I know). Also, if the results of BICEP2 are confirmed, is that it for the inflation model, with the only remaining issue being who would get the Nobel prize?

  15. I would like to ask something:
    I know that BICEP2 works like a bolometer. How does it measure polarization though?
    Thanks in advance

    1. Each of their “pixels” is in the form of a pair of perpendicular dipole antennas, each one measuring one of two polarization states. When they compute the sum of the contributions from the two antennas, they get the total amplitude (temperature) at that pixel. When they compute the difference between the two antennas, they get the polarization of the signal at that pixel.

      See the BICEP2 preprint, arXiv:1403.3985
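
      In toy form, the sum/difference idea works like the sketch below. This is only an illustration of the principle, not the actual BICEP2 pipeline; Stokes U and all instrumental effects are ignored, and the input values are made up:

        def antenna_powers(I, Q):
            """Power seen by two perpendicular antennas for a partially
            polarized signal with Stokes parameters I and Q (U ignored)."""
            P_x = 0.5 * (I + Q)   # antenna aligned with x
            P_y = 0.5 * (I - Q)   # antenna aligned with y
            return P_x, P_y

        I_in, Q_in = 1.0, 0.02            # weakly polarized input, arbitrary units
        P_x, P_y = antenna_powers(I_in, Q_in)

        print("temperature:", P_x + P_y)    # the sum recovers I
        print("polarization:", P_x - P_y)   # the difference recovers Q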

  16. In one of today’s press interviews, I thought I remembered one of the authors saying they were confident in this data because they saw the same pattern in an earlier-generation experiment. Has there been any elaboration on that somewhere?

  17. If the universe was void and a big bang did occur, where did matter like soil, rocks, and star dust come from? Explosions usually destroy, not create, matter…. Did that just happen to appear??

    1. The universe didn’t exist so it wasn’t void. Furthermore try to think of it as an expansion, not an explosion.

    2. Please read these first

      http://profmattstrassler.com/articles-and-posts/relativity-space-astronomy-and-cosmology/history-of-the-universe/

      http://profmattstrassler.com/articles-and-posts/relativity-space-astronomy-and-cosmology/history-of-the-universe/inflation/

      http://profmattstrassler.com/articles-and-posts/relativity-space-astronomy-and-cosmology/history-of-the-universe/hot-big-bang/

      and note the Big Bang was not an explosion — http://profmattstrassler.com/articles-and-posts/relativity-space-astronomy-and-cosmology/history-of-the-universe/big-bang-expansion-not-explosion/ — though it is still true that the Hot Big Bang would have destroyed anything that somehow managed to survive what came before it.

      If you still feel you don’t understand, please feel free to ask your question again. I’ve probably answered part of it, but maybe not all of it.

  18. Is it possible to deduce from this detection of the primordial B-mode polarization of the CMB the intensity of the gravitational wave background at recombination? This could tell us, after adjusting for Hubble expansion, whether there is hope of someday observing the primordial gravitational waves directly through, say, a pulsar timing array. Maybe this possibility can be ruled out now.

    PS: sorry I posted this comment in your previous post, but meant to do it here.

    1. The answer to your first question is yes. I don’t know the answer to the second question. But in any case, this gravitational wave signature, if true, is surprisingly big, not small, so a second method for detection is more likely to be ruled in than out.

      1. Thank you for the prompt answer!! Well done for your coverage of this topic, and for the blog in general.

      2. No luck finding numbers yet, but as far as I can tell even the COBE bound on inflationary gravitational waves is way below plausible detectability with a pulsar timing array. Which in fact means it’s probably dominated by the stochastic background from binary supermassive black holes, so detecting inflationary GWs would mean pulling them out of that noise, above and beyond any sensitivity concerns.

        1. Big bang theories have a long history, going back to tales invented even before written records; these theories are firmly rooted in creation stories from around the world. Science fiction served to blend and modernize these ancient myths into socially acceptable and politically viable urban legends, until a reasonable truth was discovered in what was first thought to be a buildup of guano. This amazing process, which may have started smaller than can be imagined by the human mind, then grew to the size of the Spirit-Father-Beaver’s tail (or even larger) faster than the blink of an eye. This would have destroyed any critter or heavenly object in the area, but only then could the big “bang” start; and of course this need not be a bang (explosion) unless that is a part of your core belief. This is known as “inflation” and must cover a big, big area.

          1. I don’t think you understand the role of math in modern science. There is a big difference between an urban legend and Alan Guth’s solution to the Einstein equations, or between ancient myths and the difficult, pages-long calculation of the ratio of the amounts of lithium and helium to hydrogen produced in the Big Bang. Nobody believes in the (modified) Big Bang or inflation because of words and stories. They believe in it because of precise calculations and detailed comparison with observation. There is no precedent for this in human history.

      1. In my opinion, that spectral index does not prove inflation. It may depend on the details of inflation, if the latter took place, but nothing else can be inferred. Inflation is a forced hypothesis to make ends meet.

          1. If you want proof, do math, not science. The observation matched the prediction; that’s called evidence.

          1. Exactly. The issue remains, however, that it’s not yet exceptionally convincing evidence. We need this evidence to be checked, and we’ll need more precise evidence.

  19. Assuming these results are confirmed, is this a blow against the “big splat” theory of creation, where the universe is created by the collision of two membranes? I seem to remember reading that this theory produces weaker gravitational waves than inflation.

  20. I have read that if we could study gravitational waves we could learn much more about the very earliest eras of the universe, perhaps what happened at the very beginning. How? What information would they provide, or is it all speculative?

    1. That’s exactly what happened today. BICEP2 is indirectly detecting gravitational waves, and those waves are telling us about what may have preceded and generated the Hot Big Bang itself. That’s usually what people mean when they say “the very earliest eras”.

      Other forms of gravitational waves could potentially tell us other things too, perhaps from even earlier times… but that’s a lot more speculative.

  21. Any idea as to whether this will be able to constrain the polarization structure of the gravitational waves? (General Relativity of course only has two senses of polarization, but general metric gravity theories can have up to 6.) Any constraints on this number would provide a powerful insight into the true theory of gravity.

    1. I think the answer is probably no. In the standard story, B modes in the CMB are generated by _tensor_ perturbations in the metric, or more fancily by helicity-2 polarizations. The other 4 possible polarizations you are thinking of are 2 scalar (helicity 0) and 2 vector (helicity 1) modes, which do not give rise to B modes in this story. For example, the scalar modes in the metric give rise to the density fluctuations in the temperature that we already observe in the CMB, but the scalar modes do not induce B-mode polarization. If GR were wrong and the scalar modes became dynamical degrees of freedom, we might see extra temperature fluctuations, but we wouldn’t expect to see B modes. It is precisely the tensor character of the gravitational modes that gives you electromagnetic B modes in the first place: you can’t generate B modes with a dipole anisotropy; you need a quadrupole anisotropy, i.e. you need a tensor mode.

  22. Question: is it considered possible for gravitational waves to travel faster than light? I know that general relativity suggests they would be limited much like light, but if gravity is curved space, and space expanded FTL during inflation, then a wave emanating from inflation would concomitantly have travelled FTL.

    Please advise.

    1. As I emphasized here, http://profmattstrassler.com/articles-and-posts/relativity-space-astronomy-and-cosmology/history-of-the-universe/big-bang-expansion-not-explosion/, you must be much more precise about what you mean by “travel faster than light”.

      If you ask: can a gravitational wave at the other end of the universe rush away from me, here on Earth, faster than the speed of light, the answer is — yes, it can.

      If you ask: can a gravitational wave, as it passes me, move past me faster than the speed of light, the answer is: no, it cannot.

      Only the latter situation is constrained by Einstein’s theory of relativity.

      I would say that “to travel” means “to pass through regions of space, passing other objects along the way”. The gravitational wave cannot pass any object faster than light can.

      In fact, if you were to say that gravitational waves can travel faster than light because they can separate from a distant object faster than the speed of light, then you are also forced to conclude that light can travel faster than light — because electromagnetic and gravitational waves will travel in the same way. During inflation, only a small region of space is visible, because all electromagnetic waves from objects outside that region are dragged away, by the expansion of space, faster than the speed of light. That makes it impossible for those electromagnetic waves, traveling at the speed of light relative to things at their own location, to ever enter the small region of space that we can see.

  23. In the second picture, the green curve looks like a mirror version of the blue one, though mirrored up/down rather than the classical left/right. Is that an accident, or maybe a sign of something else?

    1. No, that’s pure accident. The dots should be the sum of the lower dashed curve and the solid curve; and if you look carefully (do be careful, because this is a log plot) the dots in the blue curve are much higher than they should be.

  24. The apparent contradiction between the BICEP2 r vs n_s plot and the Planck one is caused by different assumptions in the modelling. The Planck plot assumes no “running” of the spectral index of the primordial fluctuations (the simplest case), while the “Planck” contours plotted by the BICEP collaboration allow for running, that is, it allows the primordial power spectrum of fluctuations to depart from a pure power law. This extra degree of freedom loosens the Planck constraints, and seems to be required if both the Planck and BICEP experimental results are correct.

    1. Thank you for that clarification!

      Kev Abazajian has complained that they did it wrong, by not allowing running of the running, which inflation always requires.
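
      To make “running” concrete: in toy form, the primordial power spectrum with and without running of the spectral index looks like the sketch below, using one common convention for the running term. The parameter values are illustrative only, not the actual Planck or BICEP2 fits:

        import numpy as np

        def primordial_power(k, A_s=2.2e-9, n_s=0.96, alpha_s=0.0, k0=0.05):
            """P(k) = A_s (k/k0)^(n_s - 1 + 0.5*alpha_s*ln(k/k0)); k in 1/Mpc."""
            lnk = np.log(k / k0)
            return A_s * (k / k0) ** (n_s - 1.0 + 0.5 * alpha_s * lnk)

        k = np.logspace(-4, 0, 5)                      # wavenumbers in 1/Mpc
        no_running = primordial_power(k)
        with_running = primordial_power(k, alpha_s=-0.02)

        # Negative running suppresses power on large scales (small k) relative
        # to a pure power law, which is what loosens the constraints above.
        print(with_running / no_running)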

  25. I do not think I would be wrong to say that we can now join helioseismology and gravitational waves together to look at WMAP in a new and interesting way?

    These insights also involve other people who have not been considered here?

    SOHO development was needed to see what the Sun was doing, and seeing that approach applied to WMAP serves a greater purpose, much as it improved our perception of our own Sun. So you can see how such contributions to the subject of inflation have provided interesting tools to ascertain new information about our cosmos.

  26. “We now have irrefutable evidence of Cosmic Inflation” – a quote which, I am sure, will show up in some Discovery Channel debacle (COSMOS, perhaps?) in the coming weeks.

    1. No we don’t. Not until this measurement is confirmed, we don’t. I see a lot of issues to worry about right now; and David Spergel, one of the world’s experts on this subject, concurs, so I am increasingly concerned.

    1. When there’s been only one measurement, not yet confirmed independently — and when the data is still too weak to allow an unambiguous scientific interpretation in terms of gravitational waves from inflation — it is wise to remain wary.

  27. “This has been like looking for a needle in a haystack, but instead we found a crowbar,” said co-leader Clem Pryke (University of Minnesota). Sounds definitive.

    1. Unless, of course, the crowbar is actually made of plastic instead of metal. These are still early days; it’s definitive that they found something, but I’m not sure it’s definitive yet as to what it is.

      1. Oh, absolutely! I’m reading your ongoing post and the press release and paper – it’s all very exciting, but I’ll try to ‘curb my enthusiasm’ and wait for their “Systematics” paper and other related stuff too. 🙂

  28. The BBC, among others, are publishing diagrams of the pattern. Is the meeting moving onto theories/speculation about what this might mean, or are they keeping to the pure discovery?
