after correcting for the number of correct forecasts a reference forecast - generally persistence, climatology, or random chance - would obtain. Three types of skill scores are the Heidke skill score, the Brier skill score, and the Ranked Probability skill score. The score ranges from negative infinity to 1, with 1 being a perfect score. If forecasts are consistently better than the reference forecast, the score will be closer to 1; a score of 0 indicates no improvement over the reference forecast; and a negative score indicates the forecast performs worse than the reference forecast. Note that, perversely, a forecast with a large negative score may actually provide considerable value if it can be 'inverted'. For this reason, substantial negative skill scores are rarely seen. When comparing skill scores for different forecasts, it is important to use the same method for all forecasts. For example, if you want to compare the CPC seasonal forecast to Klaus Wolter's experimental seasonal guidance, make sure you are looking at either the Heidke or the Brier skill score for both.

Forecast Value and Forecast Users

Another important attribute of forecasts is value. A forecast might be highly accurate, skillful, unbiased, sharp, and well resolved and still not be very useful. A forecast is valuable when it helps a decision maker. For example, a forecast of clear skies over a desert is probably not very helpful, but a forecast that helps a decision maker gain some benefit is considered valuable. Accurately forecasting a drought will help water managers better prepare for a low water supply. Forecasting the April 1st snowpack as early as possible would help improve annual water management operations. In essence, useful forecasts need a wide variety of attributes, including accuracy, skill, and value.

NOAA is creating ways to educate decision makers and create better consumers of forecasts. Making forecast verification measures available and explaining the techniques to users will increase the value of forecasts. For example, the Forecast Evaluation Tool and the new verification tools in the NOAA National Weather Service Western Water Supply Application Suite both make verification readily available to users (see box). Users will be able to decide which forecasts they want to use for what purpose, and will know the weaknesses, strengths, or biases of particular forecasts - for example, that a certain forecast tends to predict wetter conditions in the spring.

Verifying a forecast should ultimately lead to improvements in forecasting techniques and an increase in value to users. Overall, forecasters are starting to understand that they need to think about who is using their forecasts and the value of the forecast to those users, not just the skill score or the accuracy of a forecast. While accuracy is very important, it is not the only element of a good forecast. Whether a forecast is for weather, climate, or streamflow, a user should know what information the forecast provides, how the forecast is verified, and the limitations of the forecasts and verification methods. If users are educated about forecasts and forecast verification, they will ultimately be better consumers of those forecasts.
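To make the skill-score calculation concrete, the short Python sketch below computes a Brier skill score for a set of probabilistic forecasts measured against a climatology reference. The forecast probabilities and observed outcomes are invented purely for illustration and do not come from CPC or any other operational product.

import numpy as np

def brier_score(forecast_probs, outcomes):
    # Mean squared difference between forecast probabilities and 0/1 outcomes.
    forecast_probs = np.asarray(forecast_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return np.mean((forecast_probs - outcomes) ** 2)

def brier_skill_score(forecast_probs, reference_probs, outcomes):
    # 1 = perfect, 0 = no improvement over the reference, negative = worse
    # than the reference (e.g., climatology, persistence, or chance).
    return 1.0 - brier_score(forecast_probs, outcomes) / brier_score(reference_probs, outcomes)

# Hypothetical example: forecast probability of above-normal seasonal
# precipitation for five seasons, verified against whether above-normal
# conditions actually occurred (1 = yes, 0 = no).
forecast = [0.7, 0.4, 0.6, 0.2, 0.8]
climatology = [1.0 / 3.0] * 5   # reference: each tercile equally likely
observed = [1, 0, 1, 0, 1]

print(brier_skill_score(forecast, climatology, observed))  # about 0.69

As the article notes, the same reference forecast and scoring rule should be used for every product being compared; the Heidke and Ranked Probability skill scores follow the same general pattern of measuring a forecast's error against that of a reference.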
Forecast Verification Websites

Two online tools help make forecast verification techniques accessible and understandable to users: the Forecast Evaluation Tool (FET) for NOAA/CPC seasonal climate outlooks and the NOAA National Weather Service (NWS) Western Water Supply Application Suite for NWS water supply forecasts.

Forecast Evaluation Tool
FET is an online application for examining the success of CPC seasonal climate forecasts by climate division, season, and forecast lead time. Holly Hartmann, a scientist working for CLIMAS, a NOAA RISA program at the University of Arizona, found that forecast users were hesitant to make decisions based on forecasts without knowing the forecasts' track record, and she initiated FET in response. To use FET, register for free at http://fet.hwr.arizona.edu/ForecastEvaluationTool/. A tutorial is available on the web page. For more information about FET, see the January 2006 Intermountain West Climate Summary.

NWS Western Water Supply Application Suite
The NOAA/NWS Western Water Supply Application Suite launched in January 2008. This new tool allows users to select a state, river, and station and then visualize the data and calculate error and skill statistics. The web page is available at http://www.nwrfc.noaa.gov/westernwater/. To access the verification section, first select "Change Application" and then select the "Verification" tab; regional data can then be entered. More information is available by selecting the "About Western Water Supply" tab and then the "Verification" tab.