INTERMOUNTAIN WEST CLIMATE SUMMARY, JANUARY 2008

                                 Forecast
                         Yes             No                  Total
   Observation   Yes     hits            misses
                 No      false alarms    correct negatives
                 Total

Figure 1b. A contingency table shows what types of errors are being made. A perfect forecasting system would only produce hits and correct negatives.

The person verifying the forecast uses expert judgment to decide what makes a forecast accurate. For example, a forecast for a high temperature of 75°F might be considered inaccurate either when the observed high temperature was 76°F or when the high temperature was 85°F.

The second common forecast verification measure is skill. Skill is the accuracy of a forecast relative to a reference forecast. The reference forecast might be random chance, persistence, climatology, or even another forecast. A random chance forecast would be like flipping a coin to decide whether or not to forecast precipitation. A persistence forecast is a forecast of the same conditions that are happening at the time of the forecast; for example, if it is currently snowing, a persistence forecast is for snow to continue. A forecast of climatology is a forecast of the average conditions for the forecast period. A "skillful" forecast must show improvement over a reference forecast.

Other measures of forecast quality besides accuracy and skill include bias, resolution, and sharpness. Bias measures whether forecasts on average are too high or too low relative to the truth. Resolution measures the ability of a series of forecasts to discriminate between distinct types of events, even if the forecast itself is wrong. Sharpness indicates whether the forecasts can predict extreme values. Sharpness is important because forecasters can sometimes achieve high skill scores by predicting average conditions, but in some cases the occurrence of extreme events may be more important to users. In general, focusing on just one measure of forecast quality may be misleading. For example, in the case of Findley's forecasts, their apparent high accuracy obscured the fact that their skill was less than that of a constant forecast of no tornado.

Methods of Forecast Verification

Forecast verification methods are chosen depending on the type of verification (accuracy or skill) and the type of forecast (dichotomous, continuous, probabilistic, etc.). Examples of verification methods range from simply "eyeballing" the forecast compared to the observations, to statistically and numerically advanced methods.

Eyeballing a forecast is as simple as it sounds and can be used for a variety of forecasts. A forecaster simply looks at the forecast and the observations side by side to see how well they match up (Figure 1a). "Eyeballing" verification is very subjective and can lead to different outcomes depending on the judgment of the individual forecasters looking at the data.

A contingency table is typically used to verify dichotomous forecasts, like the tornado example above, over a period of time. The table shows the "yes" and "no" forecasts and observations (Figure 1b).
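As a rough illustration of the counts in Figure 1b, the following minimal Python sketch tallies a contingency table from paired dichotomous ("yes"/"no") forecasts and observations. The function name and the sample data are illustrative assumptions, not taken from the article.

    def contingency_table(forecasts, observations):
        """Tally hits, misses, false alarms, and correct negatives
        from parallel lists of booleans (True means "yes")."""
        hits = misses = false_alarms = correct_negatives = 0
        for fcst, obs in zip(forecasts, observations):
            if fcst and obs:
                hits += 1                # forecast yes, observed yes
            elif fcst and not obs:
                false_alarms += 1        # forecast yes, observed no
            elif obs:
                misses += 1              # forecast no, observed yes
            else:
                correct_negatives += 1   # forecast no, observed no
        return hits, misses, false_alarms, correct_negatives

    # Ten hypothetical daily tornado forecasts versus what was observed.
    forecasts    = [True, False, False, True, False, False, False, True, False, False]
    observations = [True, False, False, False, False, True, False, True, False, False]
    print(contingency_table(forecasts, observations))   # (2, 1, 1, 6)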
To find the accuracy of the forecasts, one sums the "hits" and "correct negatives" and divides by the total. This gives a number between 0 and 1; the closer to 1, the more accurate the forecast. This type of score can be very misleading for rare events, where forecasting "no" leads to a high "correct negatives" count, as with the occurrence of tornados in the Findley Affair. Numbers in the contingency table can be combined in many ways other than accuracy. For example, the False Alarm Ratio is the fraction of forecasted events that did not occur.

One can numerically verify forecasts, or calculate the error between the forecast and the observed values, with the help of graphical representations. Graphical displays, such as scatter or box-and-whisker plots, are used to verify forecasts of continuous variables such as maximum temperature over a period of days. Scatter plots show the observed amount plotted against the forecast amount; an accurate forecast in this case would lie along the diagonal of the scatter plot. Box-and-whisker plots can show the distribution of the observed values relative to the forecasted values, which can provide a measure of the resolution of the forecast. In a well-resolved forecast, the box plot of the forecast would have the same spread as the box plot of the observed values.

Skill scores can be calculated for almost all types of forecasts, but they are most often used for categorical and probabilistic forecasts, like the seasonal climate outlooks issued by NOAA's Climate Prediction Center (CPC) (see pages 13 and 14). All skill scores measure the fraction of correct forecasts relative to the total forecasts.
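To make the arithmetic above concrete, here is a minimal Python sketch, continuing from the tally sketch earlier, that computes accuracy, the False Alarm Ratio, and a generic skill score of the form (accuracy - reference accuracy) / (1 - reference accuracy). The generic form is an assumption for illustration; it is not necessarily the exact score used by CPC.

    def accuracy(hits, misses, false_alarms, correct_negatives):
        """Fraction of all forecasts that were correct (0 to 1)."""
        total = hits + misses + false_alarms + correct_negatives
        return (hits + correct_negatives) / total

    def false_alarm_ratio(hits, false_alarms):
        """Fraction of "yes" forecasts that did not verify."""
        return false_alarms / (hits + false_alarms)

    def skill_score(acc_forecast, acc_reference):
        """Improvement over a reference forecast: 1 is perfect,
        0 is no better, negative is worse than the reference."""
        return (acc_forecast - acc_reference) / (1.0 - acc_reference)

    # Counts from the sketch above: 2 hits, 1 miss, 1 false alarm,
    # 6 correct negatives over 10 days with 3 observed events.
    acc = accuracy(2, 1, 1, 6)              # 0.8
    far = false_alarm_ratio(2, 1)           # 0.333...
    # Reference: always forecast "no", correct on 7 of the 10 days.
    print(acc, far, skill_score(acc, 0.7))  # 0.8 0.333... 0.333...

Because the event is rare, the constant "no" reference already scores 0.7, so an apparently high 0.8 accuracy translates into much more modest skill, which is exactly the trap illustrated by the Findley forecasts.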