Thursday, November 30, 2023

Why Model Track Forecasts do NOT Depend on Initial Intensity

The Question

Konno-san asked the following in his 6 June 2023 post to lists.tstorms.org:

How can ECMWF be the best in the world at forecast track while not being much good at intensity?

Some Answers


Sim Aberson and Mark DeMaria suggested a couple of reasons:
  • DeMaria (1985) barotropic model study showing a stronger dependence on outer wind structure ('strength' and 'size') than inner-core intensity
  • Intensity (wind) dependence on pressure profile and model resolution
  • Steering-flow dependence on intensity -- more intense storms move with a deeper mean flow (e.g., 850-400 hPa) than weaker storms --> track forecast depends more on the accuracy of the larger-scale, steering flow (DeMaria et al., 2022)
  • Variation in steering flow with basins?

First considerations


Initial and short-range (0-24 h) TC motion can be considered a combination of: 1) internal vortex dynamical motion (e.g., beta drift); and 2) steering (advection).  Vortex initialization largely determines the beta-drift component (~1-3 m/s), which can be significant even though it is smaller than the steering flow.  The role of vortex structure in TC motion was my PhD dissertation work with Russ Elsberry (Fiorino and Elsberry, 1989a and 1989b).  Our main finding was that the flow in the 300-500 km annulus strongly affects both the speed and direction of beta drift.

For a model to make a good 12 & 24 h track forecast, both the initial and forecast large-scale flow have to be accurate, as well as the initial vortex flow in the 'critical' 300-500 km annulus.  The inner core (intensity) largely does not affect beta drift.
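The steering component above can be made concrete with a deep-layer-mean (DLM) calculation. Below is a minimal sketch, not code from any operational system: a pressure-weighted DLM steering wind where the level set (e.g., 850-400 hPa for an intense storm, per DeMaria et al., 2022) is the knob the text describes. The function name and inputs are illustrative.

```python
# Sketch: pressure-weighted deep-layer-mean (DLM) steering wind.
# Illustrative only -- each level is weighted by the pressure
# thickness it represents within the chosen steering layer.

def dlm_steering(u_by_level, v_by_level, levels):
    """u_by_level/v_by_level: annulus-mean wind (m/s) keyed by pressure level
    (hPa); levels: the steering layer, e.g. [850, 700, 500, 400]."""
    levels = sorted(levels, reverse=True)        # bottom-up, e.g. 850 first
    mids = [(a + b) / 2 for a, b in zip(levels, levels[1:])]
    bounds = [levels[0]] + mids + [levels[-1]]   # layer interfaces (hPa)
    weights = [bounds[i] - bounds[i + 1] for i in range(len(levels))]
    wtot = sum(weights)
    u = sum(w * u_by_level[p] for w, p in zip(weights, levels)) / wtot
    v = sum(w * v_by_level[p] for w, p in zip(weights, levels)) / wtot
    return u, v
```

A weaker storm would use a shallower layer (e.g., 850-500 hPa), shifting the steering estimate toward the low-level flow.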

To understand the role of initial beta drift in model motion, we need to define a metric for a 'good' TC vortex initialization.  Conventionally, it is the initial position error (IPE) and initial intensity error (IIE) -- what a forecaster might desire of a vortex initialization -- the model has the storm where I analyze it to be with my estimate of intensity.

Before showing results from the trackers, it is necessary to understand what a model TC forecast is and is not: the model does not forecast TCs directly.  The 'model' TC forecast comes from a tracker that takes in model data and outputs TC tracks.

There are many model issues that need to be understood before analyzing model TC tracks.  

First, NWP models do not forecast the 10-m wind; it is a quantity diagnosed from the model prognostic variables (and sometimes similarity theory).  

Second, there is the difference between the model spatial grid and the analysis grid used in the tracker.  Rarely does a tracker use native-resolution grids, especially for Gaussian and limited-area model grids; it most commonly uses post-processed, lat-lon grids.  

Third, there is the time dimension of the model grid.  The 'instantaneous' value of pressure, and especially wind, is a mean over a time step that can be as small as 10-15 s, and the situation is even more complicated in models that use different time steps for dynamics and physics.  

These model issues are important when interpreting model tracker output especially because of differences in model tracker algorithms.  There are always some apples-v-oranges problems.  

Finally, it is important to verify the raw output of the model tracker.  In operations the tracker output is both post-processed (bias correction) and interpolated in time to be consistent with the time when the advisory/warning is issued.  For example, the 06Z NHC forecast will not have 06Z model runs available, instead 00Z runs will be used.
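The time-shifting described above can be sketched as follows. This is purely illustrative (the function and its inputs are hypothetical, and real operational 'interpolated' aids also relocate the track to the observed TC position): a 6-h-old cycle's track is linearly interpolated so its positions line up with the current advisory's taus.

```python
# Illustrative sketch of shifting an older model track to the current advisory
# time: the 00Z tau-18 position becomes the 06Z tau-12 position, etc., with
# linear interpolation between the old cycle's taus.

def shift_track(track, offset_h, taus_out):
    """track: {tau_h: (lat, lon)} from the older cycle; offset_h: age of that
    cycle in hours; taus_out: taus relative to the advisory time."""
    taus = sorted(track)
    out = {}
    for t in taus_out:
        ts = t + offset_h                  # tau on the old cycle's clock
        lo = max(x for x in taus if x <= ts)
        hi = min(x for x in taus if x >= ts)
        f = 0.0 if hi == lo else (ts - lo) / (hi - lo)
        (la0, lo0), (la1, lo1) = track[lo], track[hi]
        out[t] = (la0 + f * (la1 - la0), lo0 + f * (lo1 - lo0))
    return out
```

The sketch is only valid while t + offset_h stays within the old cycle's track.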

The Models


We will consider three modeling systems for the period 2020-22 (4-character ATCF ID in BOLD): 

  • AVNO - GFS (the American global model)
  • HWRF - GFS-HWRF (dynamical downscaling of the GFS using the HWRF limited-area model)
  • ECMWF tracks.  The three ECMWF tracks are:

    • TECM5 - GFDL tracker of the deterministic run of the IFS; 
    • EMDT - ECMWF tracker of the deterministic IFS run; and
    • TERA5 - GFDL tracker of the ERA5 reanalysis forecasts.  

The most significant feature of the modeling systems is that only GFS-HWRF explicitly analyzes the TC vortex.

Bottom line (up front)


  • HWRF has almost no bias in the initial intensity and the smallest IIE & IPE -- a 'good' vortex initialization(?).  
  • Conversely, ECMWF has the largest IIE & IPE, but the lowest 12- and 24-h mean position errors (PE)!  
  • Also noteworthy is that the ECMWF ERA5 reanalysis has lower error than the ECMWF IFS operational runs.

The big conclusions:

  • the HWRF vortex initialization is NOT good -- the TC must be analyzed as part of the large-scale flow
  • the global model 12- and 24-h PE are incredibly small.


The GFDL & ECMWF TC Trackers


The de facto standard tracker for the US operational models is the GFDL vortex tracker (Marchok 2021) and the code is available here.


The new ECMWF tracker, described in the 2012 ECMWF tracker report (page 17), was recently updated for improved analysis of the surface wind field and sea-level pressure in Enhancing tropical cyclone wind forecasts | ECMWF.  The new tracker is an update to ECMWF's earlier trackers.

What is most distinctive about the ECMWF tracker is the use of a lower grid resolution (~200 km) in finding the TC center and the full native resolution (~9 km) in finding the max 10-m surface wind (intensity) and the wind radii.  The GFDL tracker uses a single resolution.

The grid resolution of the three global model grids fed to the GFDL tracker (AVNO, TECM5 and TERA5) is 0.25 deg.  The GFDL tracker grid for HWRF is near native resolution, ~3 km.

Comparing TECM5 (gfdl) v EMDT (ecmwf) shows the effect of the tracking algorithm, whereas comparing TECM5 (gfdl) v TERA5 (gfdl) shows the effect of the modeling system. 


All statistics are homogeneous.  The bar charts include a table with the value and the counts in [], and a box-whisker (min / 25% / 75% / max) for the error distribution with the thick black line indicating the median.


Fig. 1 gives the mean PE (bars) for the standard forecast taus of 0, 12, 24, 36, 48, 72 (3 d), 96 (4 d) and 120 (5 d) h.  Note how the mean PE of the GFDL tracker (TECM5) is lower than that of the ECMWF tracker (EMDT).



Figure 1.  Effect of tracker algorithm (EMDT v TECM5) and modeling (TECM5 v TERA5) on Position Error (PE).  The distribution is shown as a box-whisker.  The black line is the median.
  

To show how the ECMWF tracker degrades PE, i.e., produces a higher PE than the GFDL tracker from the same model output, we use a % improvement metric.

The % improvement of the PE of model1 (PE1) relative to the PE of model2 (PE2) is:

%improve = -((PE1 - PE2)/PE2) * 100%

When PE1 <  PE2 then the %improve is positive (a lower PE is good).  In Fig. 2 below we see how the ECMWF tracker produces higher PE than the GFDL tracker (negative %improve).  Both trackers use grids taken from the same native resolution model which implies grid resolution is important in finding the TC center/position.
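In code, the metric is a one-liner; this sketch simply restates the formula above.

```python
# The %improve formula from the text, as a function: positive when model1's
# PE is lower (better) than model2's.

def pct_improve(pe1, pe2):
    return -((pe1 - pe2) / pe2) * 100.0
```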


Figure 2.  % improvement (lower PE) of ECMWF tracker relative to GFDL tracker.

Now consider the effect of grid resolution on intensity in Fig. 3 below:

Figure 3. As in Fig. 1, the effect of tracker algorithm (EMDT v TECM5) and modeling (TECM5 v TERA5) on Intensity Error (IE).  The lines are the mean absolute IE; the bars and box-whiskers show the error itself.

Whereas the ECMWF tracker degraded PE, its intensity error (mean absolute IE) is about 1 kt lower, which can be attributed to the ECMWF tracker finding the max wind on the ~9 km native-resolution grid versus the 0.25 deg (~21 km) grid used by the GFDL tracker.  The bias (mean IE) is about 2 kt lower, again showing how higher resolution is better for analyzing the wind fields.
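The resolution effect on the max wind can be illustrated with an idealized vortex. The sketch below is my own construction (not tracker code): it samples a Rankine vortex, with an assumed 120-kt max at a 25-km radius of maximum wind, on a coarse and a fine grid; the coarse grid undersamples the eyewall peak.

```python
# Idealized illustration: sample a Rankine vortex (assumed Vmax 120 kt at a
# 25 km radius of maximum wind) on grids of different spacing.  The coarse
# grid misses the eyewall wind max.
import numpy as np

def rankine(r_km, vmax=120.0, rmw=25.0):
    """Tangential wind (kt): linear inside the RMW, 1/r decay outside."""
    r = np.asarray(r_km, dtype=float)
    return np.where(r <= rmw, vmax * r / rmw, vmax * rmw / np.maximum(r, 1e-9))

def sampled_vmax(dx_km, half_width_km=300.0):
    """Max wind found on a square grid of spacing dx_km around the vortex."""
    x = np.arange(-half_width_km, half_width_km + dx_km, dx_km)
    xx, yy = np.meshgrid(x, x)
    return float(rankine(np.hypot(xx, yy)).max())
```

On this idealized vortex, a ~28 km (0.25 deg) grid reports a noticeably weaker max than a 9 km grid, and both are below the true 120 kt.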


These results suggest that ECMWF should use a higher-resolution grid for finding the TC center.


The ERA5 IE are about 5-7 kt higher, but in Fig. 4 below we find no effect on PE for taus 0-36 h and a slight degradation at days 3-5.


Figure 4.  As in Fig. 1.  The effect of modeling on PE (IFS v ERA5).  The bars are the %improve of EMDT (ecmwf)  and TERA5 (gfdl) over TECM5 (gfdl).

The ERA5 initial PE is higher, but it is very impressive how the lower-resolution ERA5 model is better than, or even with, the more recent and higher-resolution IFS out to tau 48 h.


References


DeMaria, M., 1985: Tropical Cyclone Motion in a Nondivergent Barotropic Model. Monthly Weather Review, 113, 1199–1210, https://doi.org/10.1175/1520-0493(1985)113<1199:TCMIAN>2.0.CO;2.

DeMaria, M., and Coauthors, 2022: The National Hurricane Center Tropical Cyclone Model Guidance Suite. Weather and Forecasting, 37, 2141–2159, https://doi.org/10.1175/WAF-D-22-0039.1.

Fiorino, M., and R. L. Elsberry, 1989: Some Aspects of Vortex Structure Related to Tropical Cyclone Motion. Journal of the Atmospheric Sciences, 46, 975–990, https://doi.org/10.1175/1520-0469(1989)046<0975:SAOVSR>2.0.CO;2.
Fiorino, M., and R. L. Elsberry, 1989: Contributions to Tropical Cyclone Motion by Small, Medium and Large Scales in the Initial Vortex. Monthly Weather Review, 117, 721–727, https://doi.org/10.1175/1520-0493(1989)117<0721:CTTCMB>2.0.CO;2.

Magnusson, L., and Coauthors, 2019: ECMWF Activities for Improved Hurricane Forecasts. Bulletin of the American Meteorological Society, 100, 445–458, https://doi.org/10.1175/BAMS-D-18-0044.1.

Marchok, T., 2021: Important Factors in the Tracking of Tropical Cyclones in Operational Models. Journal of Applied Meteorology and Climatology, 60, 1265–1284, https://doi.org/10.1175/JAMC-D-20-0175.1.

Van der Grijn, G., J.-E. Paulsen, F. Lalaurette & M. Leutbecher, 2005: Early medium-range forecasts of tropical cyclones. ECMWF Newsletter No. 102, 7–14.

Vitart, F., J. L. Anderson, and W. F. Stern, 1997: Simulation of Interannual Variability of Tropical Storm Frequency in an Ensemble of GCM Integrations. Journal of Climate, 10, 745–760, https://doi.org/10.1175/1520-0442(1997)010<0745:SOIVOT>2.0.CO;2.


Sunday, May 14, 2023

big winds in the models

01B.2023 MOCHA


I just noticed the super Rapid Intensification (RI) of the first storm in the IO -- 01B.2023.

NB: I add the year of the season to the NNB ATCF ID because the 2023 SHEM season starts 1 July 2022. NN is the JTWC storm number and B is the subbasin id for the Bay of Bengal.

Here's a plot of the track from 01B.2023's origin as pTC (pre/potential TC) 91B from 2023050712-2023051012.  The first JTWC warning was issued on 2023051018 with an intensity of 30 kts -- what I define as genesis.  It is currently 14 UTC 14 May 2023 (2023051414)...

01B.2023 MOCHA track 2023050712-2023051412.  The TC was a pTC (91B) from 2023050712 to 2023051012.


The plot below comes from the site I set up for JTWC (and NHC) to visualize the 'diagnostic file' (input to the statistical-dynamical intensity prediction aids, e.g., SHIPS) https://jtdiag.wxmap2.com.  

The main purpose of the JTDIAG site was to move the model forecasts forward in time to the same time as the warning cycle, e.g., the 18 UTC 14 May 2023 (2023051418) warning will be issued around 20:30 UTC 14 May 2023 (+2:30 h after synoptic time).  The site also displays the sea-level pressure with a direct calculation of the ROCI (Radius of the Outermost Closed Isobar) from the lat/lon of the contour (a pretty slick GrADS trick).
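As a rough illustration of the ROCI geometry (the site itself does this with a GrADS contour trick; the Python below is just the same idea, with illustrative function names), take the mean great-circle distance from the storm center to the vertices of the outermost closed isobar.

```python
# Sketch of the ROCI idea: mean great-circle distance (nmi) from the storm
# center to the lat/lon vertices of the outermost closed isobar.
import math

def gc_nmi(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (haversine)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlam = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * 3440.065 * math.asin(math.sqrt(a))  # mean Earth radius in nmi

def roci_nmi(center, contour_pts):
    """center: (lat, lon); contour_pts: [(lat, lon), ...] along the isobar."""
    clat, clon = center
    d = [gc_nmi(clat, clon, la, lo) for la, lo in contour_pts]
    return sum(d) / len(d)
```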

What caught my attention was the 140 kt forecast of the Navy global model (NAVGEM)!  This is the first time I've seen such a big wind from a model tracker. 

plot of surface wind for the 72-h forecast from 2023051018

Focusing on the table with the max winds:


we see the 140 kts at tau 72 h for NAVGEM.

I've seen the ECMWF produce winds > super typhoon strength (130 kts), but not the other models.  Note that the GFS winds at 72 h are 102 kts...

Digging a little deeper, here is a plot of the operational trackers for 2023051018, the time of JTWC's first warning.  

NB: I am using the uninterpolated model trackers for HWRF/GFS/NAVGEM. The ECMWF forecast comes from local tracking using full-res ECMWF fields...

2023051018 model tracks for HWRF/GFS/NAVGEM/ECMWF

Both NAVGEM and HWRF correctly forecast the Rapid Intensification (RI).  The GFS & ECMWF intensity forecasts are not as good -- slower and weaker -- but they still forecast that the storm would become intense.

Going back 12 h...

2023051006 model tracks for HWRF/GFS/NAVGEM/ECWMF

NAVGEM made an even bigger forecast of 147 kts!!  And the 4-day HWRF forecast for both intensity and track are outstanding! Or as the British would say 'spot on.'

One of the benefits of this quick and dirty analysis is that I found some 'features' (i.e., bugs) in my codes (as Roseanne Roseannadanna would say: "If it ain't one thing it's something else.").  

The original plots contained a 150-kt best track intensity for 2023051400 that was not consistent with the real-time intensity on other sites such as CIRA https://rammb-data.cira.colostate.edu/tc_realtime/index.asp.  The reason for the difference is that the latest JTWC 'bdeck' (best track) has 150 kts for the 2023051400 position, and I had inadvertently used the bdeck (bio012023.dat) vice the 'adeck' (aio012023.dat) with the real-time positions used by the models (the so-called 'bogus' file).

01B.2023 is a remarkable TC, especially in how well the models performed in directly forecasting the RI.


Comments and questions always welcome.

Wednesday, August 12, 2020

A Tropical Cyclone Forecast Metric for Operations and Model Development

prospectus for a WAF paper 

  • why metrics matter: "you're only as good as what you measure"
    • .gov & .mil set standards for operational forecast quality or goodness
      • GPRA for NWS - mean 48-h PE/IE
      • PACOM for .mil - mean 24,48,72,120 PE
    • model 'goodness' based on mean PE/IE statistics
  • TC forecast is...
    • surface (10 m) wind field
    • 2-D functional representation using Position (lat/lon), Intensity (Vmax) and Radii (R34/50/64) POCI/ROCI (pressure and radius of outermost closed isobar) and other parameters...
    • NWS (noaa.gov) & PACOM (.mil) warnings/advisories based on onset of 34 kt winds
  • Standard TC metrics:
    • PE - position error falsely called 'track' error
    • IE - intensity error; intensity defined by Vmax not Pmin
    • forecast taus 0,12,24,36,48,60,72,96,120
    • primary statistic is the mean
    • best track uncertainty:
      • P ~ 5-20 nmi depending on I (big I small P uncertainty)
      • I ~ 10-20 kts largest for small I and (ironically) very large I 
  • Properties of PE (and IE)
    • first and foremost NOT equivalent to NWP metrics like 5-day 500 mb NHEM AnomCorr (NAC)
      • time series of  5DNAC --> mean
        • continuous from a continuous process (the model)
        • # of cases the same for day 5, day 10, day...
      • time series of 24/72/120 h PE -- 2019 LANT for hwrf,avno,tecm5
        • discontinuous 
        • 2 or more PE at a given time (2 or more storms)
      • # of cases at each tau is different, shown by the histogram of R34 at tau 24/72/120
        • varies with basins
        • in the LANT only 1 of every 2 forecasts has a 72-h verifying position
      • show how 24-h PE using only 120 h storms != 24-h PE using all possible
    • display means of both NAC and PE as die-off curves
      • NAC die-off can be differentiated; PE die-off cannot
      • PE should be displayed for each tau separately!
      • # of storms / mean PE
    • apples v oranges problem!!
    • the 'population' is season/basin dependent
    • year-to-year variability in season/basin mean implies the population cannot be well defined
    • serial correlation between forecasts reduces number of cases
      • e-folding time ~ 12-18 h or for forecasts every 6-h Nind~Nall/3
    • for every forecast...
      • number of verifying cases at:
        • tau 24 ~ 80% (short range)
        • tau 72 ~ 50% (medium range)
        • tau 120 ~ 30% (long range)
      • mean PE/IE represent a subset of storms
      • contribution by storm highly variable
  • the most important part of the forecast is track
    • 80% v 20%?
  • How to improve mean PE?
    • separate from IE
    • a model must make a 'good' track forecast 
      • to use the intensity? maybe...but physically intensity does depend on track
        • can be seen in ensembles -- need to make this plot again...

    • improve the process that generates the forecast -- the model -- why ECMWF is the best TC forecast model
    • reduce 'big' errors
      • do big errors happen within a storm or by storm?
  • Forecast Error (FE)
    • is not the same as PE or IE
    • must be related to the wind field (the forecast) and particularly the extent of 34 kt winds represented by the Radius of 34 kt winds (R34)
    • conceptually FE=f(PE,IE,R34)
      • in the early years (60-80s) Charlie Neumann defined FE=PE
      • or FE=a*PE + b*IE ; a=1.0 and b=0.0
  • New FE=f(PE,IE)
    • only require the forecaster (human or model) to predict position and intensity use best track (BT) R34
    • define error IKE (integrated kinetic energy, Powell...) as symmetric difference of two (intersecting) circles of R34
      • symmetric R34
      • statistical relationship between Vmax and Rmax for forecast & BT
    • use of this simplified IKE represents a lower limit in FE 
  • How
  • demonstrate how 'large' errors contribute to the season/basin mean
    • for every one 'bad' forecast it takes 5-10 'good' forecasts to compensate
    • one storm can dominate the mean at the medium/long range
  • demonstrate how storms contribute to the mean PE/IE and why a model failure for a single storm can 'ruin' the seasonal mean
  • analyze 'large' model errors for both PE and FE
    • are large PE errors always large FE?
    • how do large FE compare to PE
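The proposed FE above -- the symmetric difference of two R34 circles -- can be written down directly with flat-plane circle geometry. This is a sketch of the concept only, not a finished metric; units are whatever R34 and PE are given in (e.g., nmi).

```python
# Sketch of the proposed FE: the area covered by exactly one of two circles of
# radius R34 -- one centered at the forecast position (offset by PE from the
# best track), one at the best track position.  Flat-plane geometry.
import math

def circle_intersection(r1, r2, d):
    """Area of intersection of circles of radii r1, r2 with centers d apart."""
    if d >= r1 + r2:
        return 0.0                                 # disjoint
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2          # one inside the other
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) *
                          (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def symmetric_difference_fe(r34_fcst, r34_bt, pe):
    """FE proxy: symmetric difference of forecast and best-track R34 circles."""
    inter = circle_intersection(r34_fcst, r34_bt, pe)
    return math.pi * r34_fcst ** 2 + math.pi * r34_bt ** 2 - 2 * inter
```

A perfect forecast (PE = 0, matching R34) gives FE = 0; FE grows with both position error and wind-field size error, which is the point of the proposed metric.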

Thursday, January 30, 2020

CMC > NCEP 20200130 Update

CMC pulls ahead of the GFS? We're #4?

Update on 20200130: CMC better in Summer Hemisphere?

Mike Fiorino
30 January 2020

Recap of  the November Blog


In my first post:  https://wxmapstertc.blogspot.com/2019/11/cmc-ncep.html I discussed how the CMC GDPS (Global Deterministic Prediction System -- the Canadian model) was outperforming the NCEP GFS (the American model) in a very basic measure of global model forecast quality -- the 5-day northern Hemisphere (NHEM) height Anomaly Correlation (5DNACC). 

I acknowledged that a few points do not make a trend, but at the time hypothesized that the improvements in the Canadian model came from new/better physics ...

I asked two questions:
  1. when/what were the apparent changes in the CMC GDPS (Global Deterministic Prediction System)?
  2. was the gap between the NCEP.GFS and the CMC.GDPS a blip or a trend?
Ron McTaggart-Cowen (ron.mctaggart-cowan@ec.gc.ca) answered #1: 3 July 2019.

And he provided these links describing the changes:
The most significant physics update to me involved the convective parameterization (adding momentum transport).  As a long-time tropical cyclone (TC) NWP modeler, I was not surprised by the impact of better convection; it is consistent with related changes at ECMWF in 2008 that led to a dramatic improvement in their TC track forecasts (see: https://www.ecmwf.int/sites/default/files/elibrary/2009/17493-record-setting-performance-ecmwf-ifs-medium-range-tropical-cyclone-track-prediction.pdf)

 

Recent Trends...


Pete Kaplan's EMC Stat Page (the long-running web page used at all global model meetings at EMC during my time there in the 1990s) has been down for 'technical reasons'(?), but here are the latest stats from the EMC VSDB (https://www.emc.ncep.noaa.gov/gmb/STATS_vsdb/):



CMC is ahead of the GFS by 0.4 points (a point in NWP is a percentage point) in the NHEM but 1.2 in the SHEM.  A 1.0 point change is a pretty big deal, but again is not necessarily a trend.  What's more impressive is how the delta is consistent and how the CMC.GDPS does not 'drop out' as badly as the GFS, e.g., the SHEM 11 JAN and NHEM 4 JAN dropouts.

The new CMC GDPS model has been running since July 2019 so we have 7 months to compare in the text table below:

               NHEM                   SHEM
         CMC    GFS  CMC-GFS    CMC    GFS  CMC-GFS
201906: .866   .876  -0.010     N/A  
201907: .863   .871  -0.008    .883   .889  -0.006
201908: .897   .890  +0.007    .882   .891  -0.009
201909: .876   .874  +0.002    .885   .891  -0.006 
201910: .898   .894  +0.004    .908   .899  +0.009
201911: .907   .911  -0.004    .914   .911  +0.003    
201912: .904   .911  -0.007    .903   .903  +0.000
202001: .913   .907  +0.006    .887   .875  +0.012 
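A quick recompute of the NHEM deltas from the monthly means in the table (values transcribed from above) also counts the months in which CMC leads:

```python
# Recompute the NHEM CMC-GFS deltas from the monthly means in the text table
# and count the months in which CMC is ahead of the GFS.
nhem = {"201906": (.866, .876), "201907": (.863, .871), "201908": (.897, .890),
        "201909": (.876, .874), "201910": (.898, .894), "201911": (.907, .911),
        "201912": (.904, .911), "202001": (.913, .907)}
deltas = {m: round(cmc - gfs, 3) for m, (cmc, gfs) in nhem.items()}
cmc_leads = sum(1 for d in deltas.values() if d > 0)   # months with CMC > GFS
```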
 
What's most interesting is how the Canadian model does better in the summer hemisphere.

We're not quite at the peak of austral TC activity (around February), but it looks like the convection changes are really improving the CMC height scores, confirming (IMHO) how the tropics impact the quality of the midlatitude forecasts.

 

Some final thoughts...


Cliff Mass' recent blog post on US NWP (https://cliffmass.blogspot.com/2020/01/the-future-of-us-weather-prediction.html) makes a compelling argument that American operational NWP is fundamentally broken, with perhaps the greatest dysfunction (unstated there) in the US Navy 😟.  In the NHEM plot above, the Navy global model is barely competitive with the CFSR!  This is very distressing to me as a retired Naval Oceanography officer who implemented the first operational two-way interactive, moving nested-grid TC model in 1981...

I'm seeing a real trend here and at the rate we're going, American NWP will be 4th rate for the foreseeable...

While Cliff correctly identifies our problem mostly as one of leadership (I agree), the (much?) bigger problem in my mind is the data handling systems at NCEP NCO.  Using the unix filesystem to 'manage' gridded fields is appallingly primitive and at least 30 years behind the rest of the world!  Although the Navy global model is clearly in last place, the data systems at FNMOC have always been top notch. 

Data handling is certainly not a sufficient condition, as demonstrated by the lack of performance by the Navy global model, but it is a necessary condition.  I cannot see American NWP at NCEP progressing without a fundamental change in the nitty gritty of data, a fully-funded and outside-DC EPIC notwithstanding.

The Usual Caveats


These comments/opinions are wholly mine and do not reflect those of my current employer The University of Colorado Boulder or my previous employer NOAA.

Monday, November 25, 2019

CMC > NCEP?

CMC pulls ahead of the GFS? We're #4?

CMC Anomaly Correlation solidly #3 since August 2019...Waz Up Canada? Eh?

Mike Fiorino
20191125

The NWP league table

I casually follow the NWP scores at: EMC Stat Page (Pete Kaplan's long-running web page used at all global model meetings at EMC during my time there in the 1990s).

Here are the latest 5-d NHEM stats:

The  usual pecking order, or what I consider the 'league table' is:
  1. ECMWF
  2. UKMO
  3. NCEP (GFS)
  4. CMC
  5. NAVY 
i.e., we (the American Model) are typically #3. Since August CMC has pulled ahead of the GFS and as of today is 1 point higher.  My big question is how the CMC global model/data assimilation system has changed...

Taking a longer term view:

The color scheme: ECMWF; UKMO; CMC; Navy

ECMWF has been ichi ban for over 20 y...the Met Office #2 and CMC almost always #4.

 

Is CMC the new #3?  How good is the ACC?

I appreciate that the anomaly correlation (ACC) does not measure the entire quality of an NWP modeling system...  However, I hold that the 5-d 500 mb NHEM ACC (5NACC) is a kind of 'magic number' -- highly correlated with skill in other areas/forecast times/variables as seen in the 'scorecard.'

For example, from the latest ECMWF implementation in June 2019:

https://www.ecmwf.int/sites/default/files/elibrary/2019/19156-newsletter-no-160-summer-2019.pdf

Again, while this single score only tells part of the story, general model skill does follow the 5NACC -- it's a necessary but not sufficient condition for model improvement.

The importance of this score in NWP was made very clear to me during my 1.5-year secondment to ECMWF in 1998-99 to work on the ERA-40 reanalysis.  I developed a scheme to assimilate the tropical cyclone (TC) 'vitals' (working best track data on TC position, movement, intensity and other structure parameters).  I tested the scheme in the full ERA-40 version of the IFS, and a 1-point degradation in the 5NACC was the reason why TCs were not assimilated.  At the time, there was zero tolerance for any model change lowering the 5NACC (still true today?)...

 

Are TC forecasts consistent with the 5NACC?



I was curious if the apparent improvement implied by the 5NACC was reflected in TC track prediction...

The short answer: maybe in the atLANTic but not in Western north PACific.

Historically the CMC global model has not been a very good TC forecast aid and is known to have hyper-active tropical convection that causes excessive TC genesis especially in WPAC.  The same is true in EPAC.  The CMC model has shown some track prediction skill in the LANT relative to the GFS.

My standard score for TCs is the 72-h (3-d) mean position error (3MPE) because roughly 2/3 of all official forecasts will have a verifying 72-h position and because 3-d is about 1/2 of the mean life cycle of a TC.

In 2018 the LANT 3MPE:

  • CMC: 122 nmi (85 cases)
  • GFS: 105 nmi (85 cases) --  GFS lower (better) by 17 nmi
For the period 2019081500-2019112500 (today)
  • CMC: 128 nmi (73 cases)
  • GFS: 143 nmi (73 cases)  -- GFS higher (worse) by 15 nmi
Here are the full position error plots (i.e., where the numbers above come from):


2018 CMC v GFS LANT


2019081500-2019112500 CMC v GFS LANT
In WPAC, the 3MPE for the same periods as in the LANT:

In 2018 WPAC 3MPE:
  • CMC: 142 nmi (179 cases)
  • GFS: 119 nmi (179 cases) -- GFS lower (better) by 23 nmi
For the period 2019081500-2019112500 (today) in WPAC:
  • CMC: 171 nmi (89 cases)
  • GFS: 130 nmi (89 cases)  -- GFS lower (better) by 41 nmi
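The equal case counts above reflect a homogeneous comparison: a cycle contributes to either mean only if both models have a verifying 72-h position there. A minimal sketch with illustrative data structures (not my actual verification code):

```python
# Homogeneous comparison sketch: restrict both models' mean PE to the common
# sample of (storm, cycle) keys where both have a verifying 72-h position.

def homogeneous_mean_pe(pe_a, pe_b):
    """pe_a/pe_b: {(storm, cycle): 72-h PE in nmi}; returns the two means over
    the common (homogeneous) sample and its size."""
    common = sorted(set(pe_a) & set(pe_b))
    n = len(common)
    mean_a = sum(pe_a[k] for k in common) / n
    mean_b = sum(pe_b[k] for k in common) / n
    return mean_a, mean_b, n
```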

Some Bottom Lines:

I also read the area forecast discussion put out by WFO Denver/Boulder and have recently found references to the Canadian model as part of their prognostic reasoning.  Are the forecasters seeing the improved CMC model?

Why is the new GFS struggling to keep up with the NWP leaders in the UK, and now our friends to the North?

I could offer a few reasons, but to me the big one is that modern (> 2000) NWP is almost entirely a problem of physics, not dynamics (spatial resolution at the hydrostatic limit is only significant in its interaction with physics).  Furthermore, observations and data assimilation only matter when the innovations are small (i.e., a good model that 'looks' like the obs).

At the end of the day it's (still) the model and that means physics.