the progress made in ecological modelling in the past five years. Cooper et al. (1974, p. 23) recommend extensive use of simulation models in projects akin to SJEP and we support that recommendation. However, it should be remembered that simulation models are no better than the data that go into them, either as estimates of parameters or as validation checks. Further, ecological models should not be expected to approach the precision of hydrologic simulations, if only because of the complexity of the systems they treat.

Other Considerations

Three other research strategy topics are briefly mentioned here. They are the use of standardized procedures, the extrapolation of research results, and the statistical interpretation of those results.

-Standardized Procedures

In the preparation of environmental impact statements, the standardized matrix procedure of Leopold et al. (1971) has received much attention in the past five years. It offers a way of evaluating the relative magnitudes of environmental impacts by matching biological organisms and communities to a list of possible impacts. Each cell of the matrix represents an expected effect which is then scored by the magnitude of impact and/or the significance of the organism. The most susceptible organisms can then be identified by a summation of the scores.

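Purely as an illustration of the bookkeeping involved, the sketch below shows how such a matrix might be tallied. The organisms, impact categories, and cell scores are invented for the example and are not taken from Leopold et al. (1971) or from SJEP data; combining the two scores by multiplication is likewise only one possible choice.

```python
# Hypothetical illustration of a Leopold-style matrix evaluation.
# Each cell holds (magnitude of impact, significance of organism),
# both on a 1-10 scale; all values here are invented.
matrix = {
    "alpine sedge":  {"deeper snowpack": (6, 8), "delayed melt": (5, 8)},
    "pocket gopher": {"deeper snowpack": (4, 5), "delayed melt": (7, 5)},
    "willow shrub":  {"deeper snowpack": (2, 6), "delayed melt": (3, 6)},
}

# Score each cell (here, magnitude x significance) and sum by organism.
totals = {
    organism: sum(mag * sig for mag, sig in impacts.values())
    for organism, impacts in matrix.items()
}

# Rank organisms from most to least susceptible.
for organism, score in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{organism}: {score}")
```
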
Cooper et al. (1974) point out that this approach is particularly useful in a small, well-defined project but is less easily applied to multidisciplinary research where a wide variety of interests and judgements are involved. A matrix evaluation was applied in a limited way as part of the definition of research needs during Phase I of SJEP. A matrix was designed for the project, and some ecosystem components that later work showed to be susceptible to the effects of snowfall augmentation were successfully identified in it. The procedure was not applied to the entire project but, in those parts where it was used, our experience shows it to have been useful.

-Extrapolation of Results

There are three reasons for extrapolating the research results presented elsewhere in this report. First, it may be necessary in the future to predict potential impacts of cloud seeding in other mountain ranges from SJEP results. Second, the characteristics of simulation models will often need to be estimated from observations made in areas other than that to which the model is applied. Third, there is the need to estimate impacts over an entire mountain range from results derived from small study areas within it. The last of these has been of direct concern in this project; the first two refer to the potential applications of the project results.

The need for extrapolation is a strong reason for including error estimates with all substantive predictions of environmental impact, and we have attempted to do this in Chapter IV of this report. These error estimates are, of course, based on data derived from study areas, but it is probably safe to assume that the errors involved in extrapolation will be at least as wide as those in the original analyses. In any case, it is essential that these estimates be made.

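To make the point about widening errors concrete, the following minimal sketch fits a simple regression within an observed snowpack range and shows how the 95% prediction interval grows as the prediction point moves beyond that range. The data are invented, not SJEP measurements, and the single-predictor model is an assumption made only for illustration.

```python
import numpy as np
from scipy import stats

# Invented data: snowpack (cm) vs. some response variable in a study area.
rng = np.random.default_rng(0)
x = np.linspace(100, 200, 20)               # observed range of snowpack
y = 0.5 * x + rng.normal(0, 5, x.size)      # invented response

n = x.size
slope, intercept, *_ = stats.linregress(x, y)
resid = y - (intercept + slope * x)
s = np.sqrt(resid @ resid / (n - 2))        # residual standard error
t = stats.t.ppf(0.975, n - 2)               # two-sided 95% t value

def prediction_halfwidth(x0):
    """Half-width of the 95% prediction interval at snowpack x0."""
    se = s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum())
    return t * se

# The interval is narrowest near the centre of the observed data and
# widens as we extrapolate beyond it (e.g. to a deeper augmented snowpack).
for x0 in (150, 200, 250, 300):
    print(f"snowpack {x0} cm: +/- {prediction_halfwidth(x0):.1f}")
```
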
The studies made as part of SJEP have been conducted on spatial and temporal scales varying from broad-scale mapping of the entire San Juan Mountains to detailed plot studies of individual plants. We have found it difficult to draw these levels of inquiry together in a way which would allow an extrapolation of detailed results to impact evaluation for the entire target area of cloud seeding. The problem has not been treated explicitly in this report and is one that is clearly worth further study.

One solution is to treat the problem of research site selection as a statistical one. This requires that study areas be selected so as to be representative of the target area for cloud seeding. This was attempted in SJEP but without complete success. As a result, the areas chosen for detailed field study in the alpine tundra ecosystem were later found to be atypical in some characteristics (Caine, Appendix A, this vol. p. 19). If sufficient information were available at the start of a project like this one, it would be possible to identify representative sites statistically. In SJEP, the necessary information did not become available until late in the project's life, through the work of Andrews (this vol. p. 87) and Krebs (this vol. p. 81). This should not be interpreted as a criticism of SJEP since the overview projects were intended to give a post-facto evaluation only.

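A minimal sketch of such a statistical selection follows. The site names and mapped characteristics are hypothetical, and the criterion used (standardized distance from the target-area mean) is only one of several that could reasonably be chosen.

```python
import numpy as np

# Hypothetical mapped characteristics (elevation in m, slope in degrees,
# snowpack in cm) for grid cells covering the whole target area, and for
# candidate study sites. All values are invented for the example.
target_cells = np.array([
    [3400, 12, 150], [3550, 18, 180], [3300,  8, 120],
    [3600, 22, 200], [3450, 15, 160], [3500, 10, 140],
])
candidates = {
    "Site A": np.array([3580, 20, 190]),
    "Site B": np.array([3420, 14, 155]),
    "Site C": np.array([3200,  6, 110]),
}

# Standardize each characteristic, then rank candidate sites by their
# distance from the target-area mean; the closest site is taken as the
# most representative of the target area.
mean, std = target_cells.mean(axis=0), target_cells.std(axis=0)
distances = {
    name: np.linalg.norm((values - mean) / std)
    for name, values in candidates.items()
}
for name, d in sorted(distances.items(), key=lambda item: item[1]):
    print(f"{name}: {d:.2f}")
```
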
An alternative solution is that which has been taken to rationalize the procedures used in SJEP. Since most work in the project has been concerned with ecological processes, it is probably sufficient if typical species and communities, especially ones that are well known or have been studied elsewhere, are included in the study, whether the area in which they are studied is typical or not. In fact, we feel that it is even more important that species, communities, and situations that are susceptible to environmental change be included. In this kind of decision, the experience of an ecologist should prove more efficient than the best of experimental designs.

-Statistical Interpretation

In reporting statistically tested conclusions, the SJEP studies generally follow accepted scientific convention in requiring rejection of a null hypothesis (H0) with some earlier defined risk of error (commonly, p = 0.05). The null hypotheses used in the individual reports of Chapter IV are normally those of "no difference between experimental and control situations" or "no correlation between snowpack and response variables." This approach to scientific analysis is unlikely to be criticized, except for the decisions to use prior probabilities and the choice of critical rejection levels. These questions have been discussed frequently (e.g. Plutchik 1968, p. 112) and need not be considered here.

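Stated as code, the convention amounts to a two-sample test of "no difference between experimental and control situations," with rejection only when the computed probability falls below the predefined level. The sketch below uses invented numbers, not SJEP data, and assumes a simple t test for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented response measurements from control and experimental (seeded) plots.
control      = rng.normal(loc=10.0, scale=2.0, size=15)
experimental = rng.normal(loc=11.0, scale=2.0, size=15)

# H0: no difference between experimental and control situations.
t_stat, p_value = stats.ttest_ind(experimental, control)

alpha = 0.05  # risk of error defined before the test
if p_value < alpha:
    print(f"Reject H0 (p = {p_value:.3f}): a difference is indicated.")
else:
    # Failing to reject H0 is not proof that no effect exists;
    # see the discussion that follows.
    print(f"Do not reject H0 (p = {p_value:.3f}).")
```
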
However, when environmental impacts must be estimated, it is important to remember that acceptance of a null hypothesis does not imply its proof. There is a tendency to interpret "no proven effect," i.e. acceptance of H0, as if it were "proven no effect," i.e. proof of H0. The latter statement would, of course, require that errors in decision making be estimated on the other tail of the probability function used in statistical testing (i.e. p = 0.95, rather than p = 0.05). In an evaluation of the ecological impacts of snowpack augmentation, this
