Title: Improving American River Flood Frequency Analysis
Prepared By: National Research Council
Date: 1/1/1999
Flood Frequency Estimates for the American River

was developed explicitly for use with the log-Pearson type III distribution.

An approach sometimes applied to estimation with data censored at relatively high levels is to choose a distribution appropriate for the upper tail of the data. In some cases, there is theoretical support for the choice of distribution. For example, if a random variable has a generalized extreme value distribution, then the distribution of exceedances of a sufficiently high threshold is of the generalized Pareto type (Pickands, 1975; Smith, 1985). Smith (1987, 1989), Hosking and Wallis (1987), and Rosbjerg et al. (1992) have applied this result to flood frequency analysis.

A fundamental question in censoring, for which there is little guidance, is the choice of the censoring threshold. Several investigators have considered this issue (Pickands, 1975; Hill, 1975; Hall, 1982; Hall and Welsh, 1985), with the general conclusion that the threshold level should depend on unknown population properties of the tail. Thus, these theoretical results are of limited usefulness for small samples. The use of LH moments (Wang, 1997) renders unnecessary the choice of a censoring threshold, but introduces in its place the need to choose the order of the LH moments used. Kernel-based non-parametric estimators also eliminate the need to explicitly choose a censoring threshold, but one is implicitly established based on the bandwidth estimate. Further, one can argue that the bandwidth estimate should depend on the quantile being estimated (Tomic et al., 1996), and this gives rise to a non-unique censoring threshold when multiple quantiles are of interest. The net effect of all this is that it is difficult to give any definitive guidance on the selection of a censoring threshold. An investigator must use professional judgment to a significant degree, though it is possible to obtain some guidance and insight through investigations of physical causes of flooding at a site, studies to assess the sensitivities of quantile estimates to the choice of censoring threshold (a sketch of such a study follows below), and comparisons with nearby hydrologically similar sites.
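The following Python sketch illustrates the two ideas above: fitting a generalized Pareto distribution to exceedances of a high threshold, and checking how sensitive a quantile estimate is to where that threshold is set. It is an illustration only, not a method from this report: the annual peaks are synthetic, the candidate thresholds (70th, 80th, and 90th percentiles) are arbitrary choices, and scipy's genpareto fit stands in for whatever tail estimator an investigator would actually use.

    # Sensitivity of the 100-year flood estimate to the censoring threshold,
    # using a generalized Pareto fit to the exceedances (peaks-over-threshold).
    # All data here are synthetic; this is an illustrative sketch only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    peaks = rng.lognormal(mean=10.0, sigma=0.5, size=50)  # 50 yr of annual peaks (synthetic)
    n_years = peaks.size

    for threshold in np.percentile(peaks, [70, 80, 90]):
        exceed = peaks[peaks > threshold] - threshold
        # Fit the generalized Pareto distribution to the exceedances,
        # with the location parameter fixed at zero.
        shape, _, scale = stats.genpareto.fit(exceed, floc=0.0)
        rate = exceed.size / n_years          # mean exceedances per year
        # The 100-year level is exceeded with annual probability 1/100;
        # convert to a non-exceedance probability for a single exceedance.
        p = 1.0 - 1.0 / (100.0 * rate)
        q100 = threshold + stats.genpareto.ppf(p, shape, loc=0.0, scale=scale)
        print(f"threshold={threshold:8.0f}  n_exceed={exceed.size:2d}  Q100={q100:9.0f}")

With a real record, the spread of the Q100 column across plausible thresholds is one concrete way to exercise the professional judgment the text calls for.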
Historical and Paleoflood Data

As discussed in Chapter 2, historical and paleoflood information represents a censored sample because only the largest floods are recorded. The use and value of such information in flood frequency analyses have been explored in several studies (Leese, 1973; Condie and Lee, 1982; Hosking and Wallis, 1986; Hirsch and Stedinger, 1987; Salas et al., 1994; Cohn et al., 1997). Research has confirmed the value of historical and paleoflood information when properly employed (Jin and Stedinger, 1989). In particular, Stedinger and Cohn (1986) and Cohn and Stedinger (1987) have considered a wide range of cases, using the effective record length and average gain to describe the value of historical information. In general, the weighted moments estimator included in Bulletin 17-B is not particularly effective at utilizing historical information (Stedinger and Cohn, 1986; Lane, 1987).

Maximum likelihood estimation (MLE) procedures can be used to integrate systematic, historical, and paleoflood information (Stedinger et al., 1993); a sketch of such a combined likelihood follows below. Ostenaa et al. (1996) use a Bayesian approach to extend standard MLE procedures. This extension better represents the uncertainty in the various sources of information. The previously mentioned Expected Moments Algorithm of Cohn et al. (1997) can also
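The combined likelihood mentioned above can be sketched briefly. The example below is an assumption-laden illustration, not the report's procedure: it uses a two-parameter lognormal parent rather than the log-Pearson type III (which would add a skew parameter), synthetic data throughout, and a historical period that contributes only the count of years exceeding a perception threshold; that count enters the likelihood through a binomial term, in the spirit of Stedinger and Cohn (1986).

    # MLE combining a systematic gage record with censored historical
    # information. Illustrative sketch: lognormal parent (not the
    # log-Pearson type III of Bulletin 17-B), synthetic data throughout.
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(1)
    systematic = rng.lognormal(10.0, 0.5, size=40)  # 40 yr of gaged annual peaks
    h_years, h_exceed = 150, 3     # historical period: 3 floods exceeded T in 150 yr
    threshold = np.exp(11.0)       # hypothetical perception threshold T

    def neg_log_like(params):
        mu, sigma = params
        if sigma <= 0.0:
            return np.inf
        dist = stats.lognorm(s=sigma, scale=np.exp(mu))
        ll = dist.logpdf(systematic).sum()   # fully observed systematic record
        # Historical years are censored at T: only the number of
        # exceedances is known, so they enter as a binomial count.
        ll += stats.binom.logpmf(h_exceed, h_years, dist.sf(threshold))
        return -ll

    start = [np.log(systematic).mean(), np.log(systematic).std()]
    res = optimize.minimize(neg_log_like, x0=start, method="Nelder-Mead")
    mu_hat, sigma_hat = res.x
    q100 = stats.lognorm(s=sigma_hat, scale=np.exp(mu_hat)).ppf(0.99)
    print(f"mu={mu_hat:.3f}  sigma={sigma_hat:.3f}  Q100={q100:,.0f}")

Dropping the binomial term recovers the systematic-record-only fit, so rerunning the same code both ways gives a rough sense of what the historical information contributed to the quantile estimate.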