Title: Improving American River Flood Frequency Analysis
Prepared By: National Research Council
Date: 1/1/1999
Template: Floodplain Documents
County: Statewide
Basin: Statewide
Floodplain - Doc Type: Educational/Technical/Reference Information
Censoring

Censoring below a threshold can be an effective way to account for the fact that commonly assumed parametric distributions (such as the log-Pearson type III distribution) may be inadequate to fit the "true" distribution at a given site. At the very low end, the use of annual flood data (i.e., the largest peak flow in a year) can result in the inclusion of peak flows that are clearly not associated with floods. Floods associated with distinctly different hydrometeorological processes, such as hurricanes, convective storms, and rain-on-snow, can lead to complex distributional shapes. In some cases, it is clear that certain mechanisms do not produce large floods, and peak discharges associated with these mechanisms can be separated in the analysis. (In the case of the American River, peak discharges from late spring or early summer snowmelt events are excluded from the analysis.) It is also possible to use mixture models in the analysis, or highly parameterized distributions (such as the Wakeby) that have complex shapes. These techniques suffer from estimation problems caused by the large number of parameters. It may be preferable to resort instead to methods for censoring the data set below some threshold. Although censoring reduces the quantity of sample data, Monte Carlo results indicate that censoring can actually improve estimation efficiency (Wang, 1997). The practice of low censoring effectively allows the analyst to place the estimation focus where it belongs: on the upper tail of the distribution (NRC, 1988).

There are several approaches that can be used in estimation with data censored below a given threshold. Non-parametric approaches avoid the assumption of a specific distribution function.
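The low-censoring idea above can be illustrated with a minimal sketch. For simplicity this uses probability plot regression under a two-parameter lognormal assumption rather than the log-Pearson type III, and the flow record, censoring threshold, and quantile are synthetic and purely illustrative; the key point is that plotting positions for the retained observations are computed against the full record length, so the censored low flows still occupy their ranks without influencing the fit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic annual peak-flow record (hypothetical units); real analyses
# would use a gaged record such as the American River series.
flows = rng.lognormal(mean=9.0, sigma=0.6, size=80)

threshold = np.quantile(flows, 0.25)   # censor the lowest quarter of the record
above = np.sort(flows[flows > threshold])
n = len(flows)                          # full record length
k = n - len(above)                      # number of censored observations

# Weibull plotting positions computed against the FULL sample size,
# so the censored values still count toward the ranks of the retained data.
ranks = np.arange(k + 1, n + 1)
pp = ranks / (n + 1)

# Probability plot regression: regress log-flows on standard normal quantiles.
z = stats.norm.ppf(pp)
slope, intercept = np.polyfit(z, np.log10(above), 1)

# Estimated 100-year flood (0.99 annual non-exceedance probability).
q100 = 10 ** (intercept + slope * stats.norm.ppf(0.99))
```

Because only the above-threshold observations enter the regression, a poor parametric fit at the low end of the record cannot distort the upper-tail quantile estimate.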
Parametric approaches are based on an assumed distribution, either for the entire population or for exceedances of a specified threshold.

Non-parametric estimation methods, which typically use kernel-based estimators of the density or quantile function, can be applied to estimation of the upper tail of a distribution (Moon and Lall, 1994). Particularly appropriate is the use of kernel functions with bounded support, as only the data values falling within a finite range of an estimated quantile have a bearing on the resulting estimate. Breiman and Stone (1985) give a non-parametric method for tail modeling, which essentially involves fitting a quadratic model to the upper part of the data. Non-parametric methods in general, and especially kernel-based methods, are often criticized when they are used for extrapolation beyond the range of the data; but extrapolation beyond the data poses problems for all methods of estimation. The committee did not explore the application of non-parametric methods to the American River data because such an approach would diverge significantly from the Bulletin 17-B guidelines.

There are several estimation methods that can be applied to fit a chosen distribution, such as the log-Pearson type III, to values exceeding a given threshold. The method of maximum likelihood is efficient for many distributions (Leese, 1973; Stedinger and Cohn, 1986), but it often has convergence problems for the log-Pearson type III. Alternative methods include distributional truncation (see Durrans, 1996); partial probability weighted moments (Wang, 1990, 1996; Kroll and Stedinger, 1996); probability plot regression (Kroll and Stedinger, 1996); LH-moments (Wang, 1997); and the Expected Moments Algorithm (Cohn et al., 1997). The last method
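The maximum likelihood approach for left-censored data can be sketched briefly. Since the passage notes that likelihood maximization often fails to converge for the log-Pearson type III, this sketch substitutes a two-parameter lognormal, for which the censored likelihood is well behaved; the record and threshold are again synthetic and illustrative. The k censored observations contribute the probability F(T) of falling below the threshold, and each retained observation contributes its density.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
flows = rng.lognormal(mean=9.0, sigma=0.6, size=80)  # synthetic annual peaks
T = np.quantile(flows, 0.25)                         # censoring threshold
above = flows[flows > T]
k = int((flows <= T).sum())                          # censored count

def neg_loglik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    # k observations known only to lie below T contribute log F(T);
    # observations above T contribute their log-density. The constant
    # Jacobian term -log(x) is dropped, as it does not affect the MLE.
    ll = k * stats.norm.logcdf((np.log(T) - mu) / sigma)
    ll += stats.norm.logpdf((np.log(above) - mu) / sigma).sum()
    ll -= len(above) * np.log(sigma)
    return -ll

start = [np.log(above).mean(), np.log(above).std()]
res = optimize.minimize(neg_loglik, x0=start, method="Nelder-Mead")
mu_hat, sigma_hat = res.x

# 100-year flood estimate from the censored-data MLE.
q100 = np.exp(mu_hat + sigma_hat * stats.norm.ppf(0.99))
```

The same construction extends to any distribution with a tractable CDF; for the log-Pearson type III the optimization itself is the weak point, which motivates the moment-based alternatives cited above.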