INTERMOUNTAIN WEST CLIMATE SUMMARY, JANUARY 2008

Forecast Verification: Past, Present, and Future

By Julie Malmberg, Western Water Assessment

The goal of this article is to provide forecast users with a framework for assessing the quality of any kind of forecast. Also to this end, WWA is co-sponsoring a workshop on Forecast Verification with NOAA's Colorado Basin River Forecast Center and NRCS on February 19th in Denver. The workshop will provide forecast users with the tools to evaluate the overall quality of a forecast. It will emphasize water supply forecasts in the Western United States, but the concepts will be applicable to climate forecasts as well. Please contact Christina Alvord for more information: christina.alvord@noaa.gov.

Forecasts are issued by meteorologists, climatologists, and hydrologists to predict future weather, climate, and streamflows for a wide variety of purposes, from saving lives and reducing damage to property and crops to helping people decide what to wear in the morning. Forecast verification is how the quality, skill, and value of a forecast are assessed. The process of forecast verification compares the forecast against a corresponding observation of what actually occurred, or an estimate of what occurred. This article discusses some of the many different forecast verification methods and the concept of forecast value to users, and offers some suggestions for forecast users when considering any forecast.

Overview of Forecasts
The three types of forecasts discussed here are weather, climate, and streamflow forecasts. Weather forecasts predict the weather that will occur during a short time frame, from six hours to two weeks into the future. Climate forecasts, also called climate outlooks, predict the average weather conditions for a season or period from several months to years in advance. Climate forecasts do not predict the weather for a certain day, but rather the average weather over several days or months. Examples of climate forecasts from NOAA are on pages 13-14. Streamflow forecasts predict water supply conditions, including streamflow at a point or volume over a period, based upon variables like precipitation and snowmelt. Streamflow forecasts can be issued on daily or seasonal time scales. An example of a streamflow forecast map is on page 17.

History of Forecast Verification
In order to create better forecasts, forecasters monitor their forecasts for accuracy and compare different forecasting techniques to see which is better and why (IVMW, 2007). Weather forecasting based upon interpreting weather maps began in the 1850s in the United States, but serious efforts in forecast verification began in the 1880s. In 1884, Sergeant John Finley of the U.S. Army Signal Corps began forecasting tornado occurrences for 18 regions east of the Rocky Mountains. His forecasts were made twice a day and would be either "Tornado" or "No Tornado". This is an example of a dichotomous forecast, where there are only two possible choices. He reported a 95.6-98.6% accuracy for the first three months. However, other scientists pointed out that, ironically, he could have had 98.2% accuracy if he had forecast "No Tornado" for all the regions and all the time periods. A 10-year debate, referred to as "The Finley Affair," started after Finley's publication. This debate made forecasters realize the need for valid verification methods in order to improve forecasts, and it led to the development of verification methods and practices (Murphy, 1996).
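
The arithmetic behind the Finley Affair is easy to reproduce. The short Python sketch below computes the accuracy, or "fraction correct," of a dichotomous forecast from a 2x2 contingency table. The counts used are the values commonly cited in the verification literature for Finley's aggregated 1884 forecasts (see Murphy, 1996); treat them as illustrative rather than authoritative.

```python
# Minimal sketch: accuracy ("fraction correct") for a dichotomous forecast,
# using the 2x2 contingency-table counts commonly cited for Finley's
# aggregated 1884 tornado forecasts (see Murphy, 1996).

def fraction_correct(hits, false_alarms, misses, correct_negatives):
    """Share of forecasts that matched what was observed."""
    total = hits + false_alarms + misses + correct_negatives
    return (hits + correct_negatives) / total

# hits: "Tornado" forecast, tornado observed; false alarms: forecast, none
# observed; misses: not forecast, observed; correct negatives: neither.
hits, false_alarms, misses, correct_negatives = 28, 72, 23, 2680

print(f"Finley's forecasts:  "
      f"{fraction_correct(hits, false_alarms, misses, correct_negatives):.1%}")

# The critics' point: always forecasting "No Tornado" turns every hit into
# a miss and every false alarm into a correct negative, and scores higher.
print(f"Always 'No Tornado': "
      f"{fraction_correct(0, 0, misses + hits, correct_negatives + false_alarms):.1%}")
```

Run on these counts, the first call prints roughly 96.6% and the second 98.2%, which is precisely the irony the critics seized on: for rare events, accuracy alone rewards always forecasting the common category, one reason forecasters also examine skill, discussed in the next section.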

Types of Verification
In order for a forecast to be verified, it must be compared with some "truth." Observational data such as rain gauges, thermometers, stream gauges, satellite data, radar data, eyewitnesses, etc. are used as "truth." In many cases, however, it can be difficult to know the exact "truth" due to instrument error, sampling error, or observation errors. Accurate observations and observation systems, then, are critical to forecast verification.

Forecasters and forecast users have many different ways to verify forecasts and assess quality. Two of the traditional ways are looking at the accuracy and the skill of the forecast. Accuracy is the degree to which the forecast corresponds to what actually happened (i.e., the "truth" data) and depends on both the forecast itself and the accuracy of the measurement or observation. As mentioned above, observation data can be a limitation in all verification measures, not just accuracy. In addition, the

[Figure 1a. Observed data versus forecast data (IVMW, 2007). Scatterplot with axes labeled "forecast" and "observed."]
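
For continuous forecasts such as temperature or streamflow volume, the forecast-versus-observed pairing shown in Figure 1a is often summarized with simple error statistics such as bias and mean absolute error. A minimal sketch follows; the paired values are hypothetical, invented here purely for illustration.

```python
# Minimal sketch: summarizing how closely a continuous forecast matches the
# observed "truth," as in the pairing shown in Figure 1a. The values below
# are hypothetical daily high temperatures (degrees F), not data from the
# article.

forecast = [41.0, 38.5, 45.0, 50.5, 47.0]
observed = [43.2, 37.0, 44.1, 53.8, 45.5]

n = len(forecast)
bias = sum(f - o for f, o in zip(forecast, observed)) / n       # mean error
mae = sum(abs(f - o) for f, o in zip(forecast, observed)) / n   # mean absolute error

print(f"bias: {bias:+.2f} F (systematic over- or under-forecasting)")
print(f"MAE:  {mae:.2f} F (typical size of a miss)")
```

Bias reveals whether a forecast runs systematically high or low, while mean absolute error gives the typical size of a miss regardless of direction; both depend on the quality of the observations used as "truth," the limitation noted above.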