was developed explicitly for use with the log-Pearson type III distribution.

An approach sometimes applied to estimation with data censored at relatively high levels is to choose a distribution appropriate for the upper tail of the data. In some cases, there is theoretical support for the choice of distribution. For example, if a random variable has a generalized extreme value distribution, then the distribution of exceedances of a sufficiently high threshold is of the generalized Pareto type (Pickands, 1975; Smith, 1985). Smith (1987, 1989), Hosking and Wallis (1987), and Rosbjerg et al. (1992) have applied this result to flood frequency analysis.

A fundamental question in censoring, for which there is little guidance, is the choice of the censoring threshold. Several investigators have considered this issue (Pickands, 1975; Hill, 1975; Hall, 1982; Hall and Welsh, 1985), with the general conclusion that the threshold level should depend on unknown population properties of the tail. Thus, these theoretical results are of limited usefulness for small samples. The use of LH moments (Wang, 1997) renders the choice of a censoring threshold unnecessary, but introduces in its place the need to choose the order of the LH moments used. Kernel-based non-parametric estimators also eliminate the need to explicitly choose a censoring threshold, but one is implicitly established by the bandwidth estimate. Further, one can argue that the bandwidth estimate should depend on the quantile being estimated (Tomic et al., 1996), which gives rise to a non-unique censoring threshold when multiple quantiles are of interest. The net effect is that it is difficult to give definitive guidance on the selection of a censoring threshold. An investigator must use professional judgment to a significant degree, though some guidance and insight can be obtained through investigations of the physical causes of flooding at a site, studies of the sensitivity of quantile estimates to the choice of censoring threshold, and comparisons with nearby hydrologically similar sites.
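To make the threshold-exceedance idea concrete, the sketch below fits a generalized Pareto distribution to exceedances of a trial threshold and repeats the fit over several thresholds, the kind of sensitivity study suggested above. It is a minimal illustration only, not the procedure of any of the cited papers: the synthetic record, the threshold grid, and the use of scipy.stats.genpareto are assumptions made for the example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic annual peak flows for a hypothetical site (cfs); a real
# study would use the observed systematic record for the gage.
peaks = stats.genextreme.rvs(c=-0.1, loc=40_000, scale=15_000,
                             size=80, random_state=rng)

def gpd_quantile(peaks, threshold, aep=0.01):
    """Fit a generalized Pareto distribution to exceedances of
    `threshold` and return the flow with annual exceedance
    probability `aep` (0.01 corresponds to the 100-year flood)."""
    exceedances = peaks[peaks > threshold] - threshold
    p_exc = len(exceedances) / len(peaks)   # chance a year exceeds threshold
    shape, _, scale = stats.genpareto.fit(exceedances, floc=0)
    # Solve aep = p_exc * (1 - G(q - threshold)) for q
    cond_p = 1.0 - aep / p_exc
    return threshold + stats.genpareto.ppf(cond_p, shape, loc=0, scale=scale)

# Sensitivity of the 100-year estimate to the censoring threshold
for thresh in np.percentile(peaks, [50, 70, 80, 90]):
    print(f"threshold {thresh:9.0f} cfs -> Q100 estimate "
          f"{gpd_quantile(peaks, thresh):9.0f} cfs")

A table of such estimates against the threshold is one simple way to carry out the sensitivity study mentioned above; estimates that remain stable over a range of thresholds lend some confidence to the choice.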
Historical and Paleoflood Data

As discussed in Chapter 2, historical and paleoflood information represents a censored sample because only the largest floods are recorded. The use and value of such information in flood frequency analyses have been explored in several studies (Leese, 1973; Condie and Lee, 1982; Hosking and Wallis, 1986; Hirsch and Stedinger, 1987; Salas et al., 1994; Cohn et al., 1997). Research has confirmed the value of historical and paleoflood information when properly employed (Jin and Stedinger, 1989). In particular, Stedinger and Cohn (1986) and Cohn and Stedinger (1987) have considered a wide range of cases, using the effective record length and average gain to describe the value of historical information. In general, the weighted moments estimator included in Bulletin 17-B is not particularly effective at utilizing historical information (Stedinger and Cohn, 1986; Lane, 1987).

Maximum Likelihood Estimation (MLE) procedures can be used to integrate systematic, historical, and paleoflood information (Stedinger et al., 1993). Ostenaa et al. (1996) use a Bayesian approach to extend standard MLE procedures. This extension better represents the uncertainty in the various sources of information. The previously mentioned Expected Moments Algorithm of Cohn et al. (1997) can also be used to incorporate such information.
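As one way to picture how a likelihood can integrate systematic and historical information, the sketch below writes out the classical censored-data likelihood for a short gaged record plus an h-year historical period in which only floods exceeding a perception threshold were documented. The lognormal parent, the variable names, and the synthetic numbers are assumptions made for illustration; the studies cited above work with the log-Pearson type III distribution and more elaborate formulations.

import numpy as np
from scipy import stats, optimize

# Hypothetical data in log10(cfs): a short systematic record plus
# k = 2 documented historical floods that exceeded a perception
# threshold T during an h-year historical period.
systematic = np.log10([31_000, 52_000, 18_000, 44_000, 27_000,
                       61_000, 38_000, 23_000, 49_000, 35_000])
historical = np.log10([95_000, 120_000])
T = np.log10(80_000)     # perception threshold
h = 150                  # length of historical period, years

def neg_log_likelihood(params):
    """Censored-data log likelihood for a lognormal parent (normal in
    log space): exact densities for observed floods, plus a term for
    the h - k historical years known only to have stayed below T."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    k = len(historical)
    ll = stats.norm.logpdf(systematic, mu, sigma).sum()   # gaged years
    ll += stats.norm.logpdf(historical, mu, sigma).sum()  # documented floods
    ll += (h - k) * stats.norm.logcdf(T, mu, sigma)       # censored years
    return -ll

start = [systematic.mean(), systematic.std(ddof=1)]
mu_hat, sigma_hat = optimize.minimize(neg_log_likelihood, start,
                                      method="Nelder-Mead").x

q100 = 10 ** stats.norm.ppf(0.99, mu_hat, sigma_hat)   # 1% AEP flood
print(f"MLE: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}, Q100 = {q100:,.0f} cfs")

The factor raised to the power h - k is what carries the information that no other floods exceeded the threshold during the historical period; dropping it reduces the analysis to the systematic record plus two isolated observations and discards most of the value of the historical data.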