Of Particular Significance

Another Storm Predicted

POSTED BY Matt Strassler

ON 11/05/2012

The greater New York region, having been broken into disconnected and damaged pieces by Hurricane/Nor’easter Sandy, is still reassembling itself.  Every day sees improvements to electrical grids and mass transit and delivery of goods, though there have been many hard lessons about insufficient preparations.  Here’s an impressive challenge: over a million people and thousands of businesses lack electrical power; therefore many of them are running generators, to stay warm, keep food cold, and so forth; but the generators require fuel, typically diesel or gasoline; and so there is a greater need for fuel than usual; but a significant fraction of the gas stations can’t pump fuel for their customers… because they lack electrical power and don’t have their own generators. These and other nasty surprises of post-storm recovery should be widely noted by policy makers and the public everywhere, especially in places that, like New York when I was a child, rarely experience disasters.

Unfortunately, another storm (an ordinary nor’easter) is now forecast for mid-week. While much weaker than the last one, it could still pose a danger to a region whose defenses are still being repaired.  As was the case with Sandy, the new storm was already signaled a week in advance by the ECMWF (the European Centre for Medium-Range Weather Forecasts), the main European weather-forecasting computer program, or “model”.  Confidence in the prediction has been growing, but predictions this far in advance do still change.  One must also keep in mind that a shift in the storm’s track of one or two hundred miles could very much change its impact, so the consequences of this storm, even if it occurs, remain quite uncertain.  But again we are reminded, as we were last week, that weather forecasting has improved dramatically compared to thirty years ago; the possibility of a significant storm can now often be flagged a week in advance.

What is this European ECMWF model? What is its competitor, the US-based GFS (Global Forecast System) model? And what about the other models that also get used?   All of these are computer programs for forecasting the weather; all of them use the same basic weather data as their starting point, and all have the same basic physics of weather built into them.  So what makes them different from, and more or less reliable than, one another?  I asked one of my commenters, Dan D., about this after my last post.  Here’s what he said, along with my best (and hopefully accurate) attempts at translation for less experienced readers:

Both the GFS and ECMWF models are global models based on the primitive equations (i.e. the Navier-Stokes equations plus the thermodynamic energy equation and mass conservation equation), and both initialize their grids with mostly the same atmospheric data gathered from around the globe.

Editor’s translation: this means that both use the basic physics equations governing the motion of fluids (such as air and the water vapor it carries), together with the conservation of energy and mass; and they both start from the same kinds of weather data gathered from around the world.
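
[Editor’s aside: for readers who would like to see what these “primitive equations” look like, here is one common schematic form, heavily simplified; real forecast models write these in special coordinate systems and include additional terms, so this is only an illustration, not the exact set of equations any particular model solves.]

\[
\begin{aligned}
\frac{D\vec v}{Dt} &= -\frac{1}{\rho}\nabla p \;-\; 2\,\vec\Omega \times \vec v \;+\; \vec g \;+\; \vec F_{\rm friction} &&\text{(momentum: Navier–Stokes with Earth's rotation)}\\
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\vec v) &= 0 &&\text{(conservation of mass)}\\
c_p\,\frac{DT}{Dt} \;-\; \frac{1}{\rho}\,\frac{Dp}{Dt} &= Q &&\text{(thermodynamic energy equation, with heating rate } Q\text{)}\\
p &= \rho R T &&\text{(ideal gas law for air)}
\end{aligned}
\]

Here \(\vec v\) is the wind velocity, \(\rho\) the air density, \(p\) the pressure, \(T\) the temperature, \(\vec\Omega\) the Earth’s rotation vector, and \(D/Dt\) the rate of change following the moving air; a similar conservation equation tracks water vapor.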

One main difference between the two is the grid resolution. The GFS runs at a grid resolution of roughly 27 km in the horizontal, while the ECMWF runs at roughly 16 km (technically both don’t actually use grids, but rather spectral decomposition; the above are the effective grid spacings). All other things being equal, having a finer grid spacing will generally improve your forecast, since you can resolve more of the relevant weather features at smaller scales. This is assuming that your assumptions about the subgrid scale features are still valid when you change the grid resolution, which is a big assumption. There are other caveats, but I won’t go there.

Editor’s translation: the atmosphere is far too complicated to be simulated in full detail by a computer program; there just aren’t big enough computers for that.  Instead, every forecasting program makes approximations and simplifications.  The approximations made by the ECMWF allow it to keep track of more of the small details of what the atmosphere is doing than the GFS can; this may make the ECMWF more accurate, although the advantage depends on whether the simplifications it makes at even smaller scales remain valid.
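
[Editor’s aside: here is a rough back-of-the-envelope sketch, my own and not taken from either center’s documentation, of why finer grids are expensive.  Halving the horizontal grid spacing means four times as many grid columns, and numerical stability typically forces the time step to shrink in proportion to the spacing, so the work grows roughly as the cube of the refinement factor.]

    # Editor's toy estimate (not any forecasting center's real cost model) of how the
    # computational work of a forecast grows as the grid is refined.  Halving the
    # horizontal spacing quadruples the number of grid columns, and the stability
    # (CFL) condition roughly forces the time step to shrink in proportion to the
    # spacing, so the total work scales like (old_spacing / new_spacing)**3.

    def relative_cost(old_spacing_km, new_spacing_km):
        """Very rough relative cost of a run at new_spacing_km versus old_spacing_km."""
        refinement = old_spacing_km / new_spacing_km
        grid_columns = refinement ** 2   # more points in each horizontal direction
        time_steps = refinement          # shorter time step for numerical stability
        return grid_columns * time_steps

    # Effective spacings quoted above: roughly 27 km (GFS) versus 16 km (ECMWF).
    print(round(relative_cost(27.0, 16.0), 1))   # about 4.8 times the work, all else equal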

The other big difference between the ECMWF and the GFS, and probably the more significant one, is the method of initialization. Both use statistical variational techniques whereby weather observations are statistically optimally combined (in the least-squares sense) with a background “guess” field for the model state variables (such as temperature, moisture, wind speed and direction, etc.). The GFS takes observations centered around the given initialization time (such as 0000 UTC) and makes an analysis at a single time, using a technique known as 3DVAR. This analysis is then fed into the model grid and the forecast is launched from there. The ECMWF does a very similar thing, except that instead of utilizing an analysis at a single time, it actually uses an enhanced technique known as 4DVAR, wherein the model itself is run forward and backward in time over a certain interval (I think it’s on the order of 6 hours or so, I’d have to check), assimilating observations from within that entire window. The forecast “trajectory” is optimally corrected over several forward-backward iterations to best fit the “trajectory” of the observations within that time window. The final analysis valid at the end of that window is then used to make the subsequent forecast. Because 4DVAR makes use of a longer time window and tries to correct the model forecast trajectory, while 3DVAR merely tries to correct an initial guess valid at a single time, the former generally yields a much more accurate final analysis. The disadvantage is that 4DVAR is far more computationally expensive than 3DVAR, and one needs to create an adjoint (backward-in-time version) of the entire forecast model code, which is nothing short of a nightmare for these very complex codes.

Editor’s translation: a useful analogy here would be this: it is as though the GFS is looking at a snapshot of the atmosphere and using that information to predict the future, while the ECMWF is looking at a short video of the atmosphere and making sure that its prediction of the future is consistent with the whole video clip.  This technique, while difficult to implement, generally gives the ECMWF a more accurate starting point, and therefore a more accurate forecast.
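
[Editor’s aside: for readers who like to see an idea turned into something concrete, here is a deliberately over-simplified sketch, invented purely for illustration, in which the whole “atmosphere” is a single number evolving in time.  The 3DVAR-style analysis blends the first guess with one observation at the analysis time (the “snapshot”), while the 4DVAR-style analysis adjusts the starting value until the model’s whole trajectory best fits all the observations in the time window (the “video”).]

    # Editor's toy sketch of 3DVAR versus 4DVAR (not real data-assimilation code).
    # The "atmosphere" is one number x evolving under a toy model x_{k+1} = a * x_k.
    import numpy as np

    a = 0.9                                   # toy model dynamics
    times = np.arange(5)                      # observation times in the window
    true_x0 = 10.0
    truth = true_x0 * a ** times              # true (unknown) trajectory
    rng = np.random.default_rng(0)
    obs = truth + rng.normal(0.0, 1.0, times.size)   # noisy observations
    x_background = 8.0                        # prior "first guess" of the initial state
    B, R = 4.0, 1.0                           # background / observation error variances

    def cost_3dvar(x0):
        # Blend the first guess with the single observation at the analysis time.
        return (x0 - x_background) ** 2 / B + (x0 - obs[0]) ** 2 / R

    def cost_4dvar(x0):
        # Ask that the whole trajectory launched from x0 fit all observations in the window.
        trajectory = x0 * a ** times
        return (x0 - x_background) ** 2 / B + np.sum((trajectory - obs) ** 2) / R

    candidates = np.linspace(5.0, 15.0, 10001)            # brute-force minimization
    best_3d = candidates[np.argmin([cost_3dvar(x) for x in candidates])]
    best_4d = candidates[np.argmin([cost_4dvar(x) for x in candidates])]
    print(f"truth {true_x0:.2f}   3DVAR-style {best_3d:.2f}   4DVAR-style {best_4d:.2f}")

With several observations constraining it, the 4DVAR-style estimate is usually (though not on every random draw) closer to the truth; real systems also use far more sophisticated minimization methods than the brute-force search above.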

Both of the above contribute to generally superior forecasts by the ECMWF for most situations (in fact, I think a study was done not so long ago in which the ECMWF 4DVAR was used to initialize the GFS forecast model; the results showed a significant improvement in the GFS forecasts, nearly the same accuracy as the typical ECMWF. I’ll see if I can dig it up). The next obvious question is why the GFS doesn’t follow suit with 4DVAR and higher resolution. The reasons are complex, but partly, as I see it at least, it is because the ECMWF center only has to deal with one model, while the U.S. weather enterprise is concerned with multiple models at different scales, and thus has to spread its resources more thinly. The full GFS output is also freely available to anyone, while the ECMWF output is not. Personally, I’d like us in the U.S. to focus on a more unified weather modeling framework, while still keeping everything open access.

Editor’s note: I don’t know the details here, but it sounds as though the US system is trying to do more things than the European one, while the Europeans, being more narrowly focused, have been able to develop more advanced simulation techniques.  I’m not sure about this.

Finally, what makes the modern versions of the GFS/ECMWF so much better than their predecessors is a combination of increasing model resolution, better numerical solution techniques, better physical parameterizations of clouds and precipitation, radiation, surface fluxes, and the like, and better initialization procedures (such as 4DVAR). The basic equations that the models are based on, however, have not changed much, only the solution details and how we handle the complex parameterizations of the other important physical phenomena not directly related to fluid flow.

Editor’s translation: Although the basic physics that goes into the newer simulation tools isn’t that different from what was in the older ones, there have been many incremental improvements, on many different fronts.  In other words, it was many small steps, rather than one big one, that made the newer models better.

20 Responses

  4. What is an appropriate measure – that would make sense to a non-specialist – of the computing power needed to run these models? Why can’t the NWS run models at multiple scales? What is the limiting constraint? Dollars is the obvious answer; but given the damage a Sandy can cause, and how much that damage might be ameliorated by more advance warning, limiting dollars in this effort is penny wise pound foolish, no?

    1. To obtain better weather forecasts, resources need to be allocated to this science in a broader sense than just for computer power.
      1) One limiting factor for the forecast models is that the time to run the forecast must be significantly shorter than the forecast period; otherwise, the forecast will already be outdated when it is completed. For example, running a forecast for the next 48 or 72 hours should not take more than some 8 or 12 hours. This available time puts a limit on the complexity of the forecast, for a given computer speed.
      2) Just using more and faster computers alone is not sufficient to significantly increase the quality of the forecast. While the basic physical equations are well known and the same for all models, a forecast needs to make simplifications of the physics to be able to complete the calculation in a reasonable time. Making the right simplifications, i.e., neglecting the less-important effects while not throwing out the important ones, requires a lot of experience and also several trial-and-error experiments. A permanent commitment to maintain a team of experienced scientists is required. (Also please note, as explained in my other comment above, that doubling the resolution of the forecast model requires about 16 times the computer power, so there is only a limited gain from spending more money on computers alone.)
      3) A weather forecast calculated by a computer model needs real observations as the starting point. The availability of observations (measurements) of variables such as temperature, wind speed, and humidity puts a limit on the achievable quality of the forecasts. While measurements of the surface temperature on land are provided by a network of meteorological stations, fewer data are available over the oceans and for the upper atmosphere. The number of meteorological stations has in general decreased over the last decades, and this decrease is only partly compensated for by more satellite-based observations. A well-planned combination of observations and computer models is necessary for realistic forecasts.
      In general, I fully agree with the statement that investing in better weather prediction and climate research is certainly to the benefit of ourselves and our children.

  5. That’s correct, Dr. Strassler. The National Centers for Environmental Prediction (NCEP) does indeed have a larger mission requirement than the ECMWF, at least twice as large.

    NCEP runs the GFS model; a number of regional (smaller-domain) weather forecast models, some focused on fire weather and others on hurricane track and intensity; an air quality model; and an ocean wind-wave model. In addition, it makes multiple lower-resolution runs of the GFS and regional models to create ensemble forecasts. All of these model runs are repeated at 6-hour intervals. And in the time remaining on NCEP supercomputers, it runs a coupled ocean/atmosphere model for seasonal/inter-annual climate forecasts.

    The ECMWF runs their fabulous global model, a 51-member forecast ensemble, and an ocean wave model just twice per day, plus their seasonal/interannual climate model. Their business model allows them to have twice the staff of NCEP and a far more powerful computing system. As Dan already explained so nicely, this allows for a much more sophisticated weather model and data-assimilation system.

    I work within the NWS. I am late to the party. I come to this blog to learn a little bit about esoteric particle physics from a great teacher but instead found some nice posts about our recent forecasts for Sandy. Thanks for the kind words and praise, Dr. Strassler — many of us within the NWS/NCEP love its mission and the job. Kudos to Dan as well for explaining things in such a clear way.

  6. Dr. Strassler,

    I’m flattered that you highlighted my comment in such a manner! I pretty much agree with all your “Editor’s notes”. Thanks for discussing these things on your blog and supplying some positive exposure for the meteorological community. It is much appreciated!

    1. Thanks for your correction about tornadogenesis.

      The reason for saying that not all of the needed data was meteorological is that some of it is oceanographic, and also because some people designate as meteorological only what pertains to the lower and middle troposphere, although the upper troposphere is sometimes included. For those who follow that custom, data from higher regions is designated as ‘atmospheric’, despite its meteorological relevance. That is probably why the quote in Dr. Strassler’s original post carefully used the adjective ‘atmospheric’.

      1. Thanks for the explanation, but I have never met a meteorologist that didn’t include the upper troposphere or even the stratosphere within the realm of Meteorology. Not a big deal, though :).

  7. A nice summary.

    Here are a few comments.

    The ‘atmospheric data gathered from around the globe’ includes a lot of data from both polar orbiting and geostationary weather satellites. (The two types of satellites have complementary advantages.) Some of the important data might not be characterized as meteorological: it concerns the upper atmosphere, especially the jet streams, upper-level fronts, almost everything about the atmosphere over the oceans, stratospheric ozone (for the UV intensity at the surface, and also for the temperature profiles in the lower stratosphere, the height of the tropopause, the degree of mixing between the upper troposphere and the lower stratosphere), the amplitude of gravity waves (buoyancy waves, often triggered by strong winds encountering mountain ranges), which can exert considerable drag, and oceanographic data (temperature profiles in the upper ocean greatly affect whether a tropical storm or hurricane weakens or strengthens). Particularly desired: better data on wind shear (the change of the strength and direction of the wind with altitude). Wind shear can snuff out a hurricane or tornado, by disorganizing it.

    The community already knows a lot of the physics that would improve forecasts, but lacks the computing power to include it in the data-assimilation and forecasting calculations. The ECMWF has better computers than NOAA and the DoD weather forecasting services. This is purely a funding issue, and it has been this way for years. Fortunately, the various US, Canadian, and European forecast services share data and coarse-grained forecasts.

    Some of the US forecasting services have been transitioning to 4DVAR for several years.

    All of the data-assimilation and forecasting groups keep up-to-date statistics on the accuracy of their data products, and use them to test whether any change in their algorithms helps or hurts.

      1. “The ‘atmospheric data gathered from around the globe’ includes a lot of data from both polar orbiting and geostationary weather satellites. (The two types of satellites have complementary advantages.) Some of the important data might not be characterized as meteorological: it concerns the upper atmosphere, especially the jet streams, upper-level fronts, almost everything about the atmosphere over the oceans, stratospheric ozone (for the UV intensity at the surface, and also for the temperature profiles in the lower stratosphere, the height of the tropopause, the degree of mixing between the upper troposphere and the lower stratosphere), the amplitude of gravity waves (buoyancy waves, often triggered by strong winds encountering mountain ranges), which can exert considerable drag, and oceanographic data (temperature profiles in the upper ocean greatly affect whether a tropical storm or hurricane weakens or strengthens). Particularly desired: better data on wind shear (the change of the strength and direction of the wind with altitude). Wind shear can snuff out a hurricane or tornado, by disorganizing it.”

        A couple things. First, you are right about the complementary nature of polar orbiting and geostationary satellites. We definitely need both; every source of data helps!

        Second, I’m not sure what you mean when you say that “some of the important data might not be characterized as meteorological”, because pretty much everything in your list *is* in fact meteorological.

        Third, wind shear is actually very important for tornado formation. Our best theories for tornadogenesis absolutely rely on the presence of strong wind shear (specifically, vertical shear of the horizontal wind) through a deep layer of the troposphere. You’re correct, though, that wind shear tends to be detrimental to hurricanes.

  8. I like your simplifications here. Especially the part about “looking at a short video” sounds both accurate and easy to understand.

    ” it sounds as though the US system is trying to do more than the European one, while the Europeans, while being more focused, have been able to develop more advanced simulation techniques.”

    It seems likely the more advanced techniques would be available to the Americans if they wanted to use them. My uninformed guess would be that they are limited by how much computing power they have and need to use simpler techniques because they do more.

    1. Regarding your last paragraph, I think you are completely correct in your assessment, but I must clarify that I don’t work directly with the operational numerical forecasting community, and so the quoted comments of mine are my impressions only as a research meteorologist who is an end-user of the operational products (I often use them to help initialize our own research-grade models specialized for small-scale weather). Someone who actually works in that community may have a somewhat different assessment, but I suspect that this is a common sentiment.

      1. To clarify further, I meant the quoted comments of mine regarding only the reasons why I think the GFS isn’t as sophisticated as the ECMWF. Again, there may be other more pertinent reasons that I don’t know about.

  9. Over the last thirty years computer power has increased exponentially following Moore’s law.

    Would one expect the accuracy of weather forecasting to increase in the same way ?

    Or is it the nature of the physical equations that means double the computer power does not lead to double the accuracy of the forecast ?

    1. No to the first question, yes to the second. The scaling of accuracy is definitely not one-to-one with the scaling of computer power. Part of the reason for this is the intrinsic limit on weather predictability due to chaos, and the fact that we only observe the atmosphere in a very limited fashion, so the initial conditions of our models are only approximations to the real state. The physical parameterizations of atmospheric processes are also only approximate and in some cases very poorly understood. [Editor’s note: a toy illustration of this chaotic sensitivity appears just after the comments.]

    2. Concerning your first question: Weather and climate models run in 4 dimensions: 3 dimensions of space and one of time. Improving the model resolution by a factor of 2 requires doubling the number of points in each of the 4 dimensions, i.e., about 2 × 2 × 2 × 2 = 16 times the computer power. Analogously, improving the resolution by a factor of 4 requires about 4 × 4 × 4 × 4 = 256 times the computer power. In summary, the model results improve far more slowly than the increase in raw computer power suggests. Computer power doubles roughly every 18 months, but doubling the model resolution through increased computer power alone therefore takes about 4 × 18 months, or 6 years.

      But more computer power is not the only way to increase the accuracy of the model predictions; more intelligent algorithms also offer significant potential.

      Concerning your second question: Physical phenomena occur on different scales of time and space. For example, a typical cyclone may have a spatial (horizontal) extent of about 1000 km and a lifetime of a few weeks. These large-scale features, which are already present in the models, can be better predicted if more computer power is used. On the other hand, if you look into finer and finer details, meaning smaller areas (a few kilometers) and shorter times (some minutes), more physical phenomena that are not yet (fully) included in the model become important. This means that with finer resolution, more physical processes (like small-scale turbulence) need to be taken into account, and the physical equations used in the model become more complicated. In summary, more computer power contributes to better predictions, but cannot be the only means to achieve them.
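
[Editor’s aside: the remark above about chaos limiting predictability can be illustrated with a famous toy system, the Lorenz (1963) equations, which were themselves inspired by weather. The sketch below, written purely for illustration and in no way a weather model, starts two runs from states that differ by only one part in a million and watches them drift apart until the “forecast” becomes useless.]

    # Editor's toy illustration (the Lorenz 1963 system, not a weather model) of how
    # chaos limits predictability: two runs that start almost identically eventually
    # diverge completely, no matter how precisely the computer does the arithmetic.
    import numpy as np

    def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Advance the Lorenz system by one small forward-Euler time step."""
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    run_a = np.array([1.0, 1.0, 20.0])
    run_b = run_a + np.array([1e-6, 0.0, 0.0])   # a tiny "observation error"

    for step in range(1, 6001):
        run_a = lorenz_step(run_a)
        run_b = lorenz_step(run_b)
        if step % 1000 == 0:                     # report every 5 units of model time
            gap = np.linalg.norm(run_a - run_b)
            print(f"model time {step * 0.005:5.1f}   separation {gap:10.6f}")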
