following points have been extracted (Woodley et al. 1977; Flueck, Woodley, and Jordan 1977). Data obtained prior to the change in flares give only weak and inconsistent evidence for treatment effects, with some partitions suggesting rainfall increases due to seeding and others suggesting the opposite. Data obtained after the change in flares seem to give uniformly consistent evidence for rainfall increases (up to a factor of two or more) with strong statistical support. The evidence is strongest for 1976, when the greatest number of seeding planes and seeding flares were used.

It has been reported that, on the basis of the results of FACE I, a confirmatory experiment (Phase II) is being planned.

5. NONRANDOMIZED PROJECTS

Before closing, I want to call attention briefly to a class of very important weather modification problems (projects) where randomization is either impossible or very difficult. With a number of principal investigators from other institutions, I have been studying for the past six years the effects of metropolitan St. Louis on nearby weather. This project is called METROMEX. For detailed references, see Changnon, Huff, and Semonin (1971) and Braham (1974).
One might expect large industrialized cities to exert considerable influence on local weather. But how does one check such expectations? We have been trying to establish physical cause-effect relationships between possible causal agents and the processes through which they might act. We have no control at all on treatment. No randomization of treatment is possible; in fact, one of our aims is to identify the "treatment" agents.
We are using simple sampling considerations to guide our measurement programs and simple statistics to describe what we measure. We look for systematic differences between measurements made in different conditions. We play Monte Carlo games with data sets to gain insight into the likelihood that suggested relationships might have occurred through sampling biases. But we find ourselves coming up short of "proof" of urban weather effects except where we can establish a chain of cause-effect relationships, each link of which is capable of verification through direct observation.
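To make those "Monte Carlo games" concrete, the following is a minimal sketch, in Python, of the kind of resampling calculation involved: the condition labels are repeatedly reshuffled to estimate how often chance alone would produce a difference as large as the one observed. The rain-gauge values, the group names, and the choice of a difference in means as the test statistic are illustrative assumptions, not METROMEX data or procedures.

```python
import random

# Hypothetical rain-gauge totals (mm) for two conditions; the numbers are
# invented for illustration only and are not METROMEX measurements.
urban_plume = [31.2, 28.5, 40.1, 35.7, 29.9, 44.0, 38.2]   # gauges in the suspected urban plume
background  = [27.4, 30.1, 26.8, 33.0, 25.5, 29.7, 31.8]   # gauges outside it

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(urban_plume) - mean(background)

# Monte Carlo "game": pool all measurements, shuffle them, split them back
# into groups of the original sizes, and record how often chance alone gives
# a mean difference at least as large as the one actually observed.
pooled = urban_plume + background
n_trials = 10_000
count_as_extreme = 0
random.seed(1)

for _ in range(n_trials):
    random.shuffle(pooled)
    fake_urban = pooled[:len(urban_plume)]
    fake_background = pooled[len(urban_plume):]
    if mean(fake_urban) - mean(fake_background) >= observed_diff:
        count_as_extreme += 1

p_value = count_as_extreme / n_trials
print(f"observed difference: {observed_diff:.2f} mm, "
      f"Monte Carlo p-value: {p_value:.3f}")
```

A small Monte Carlo p-value of this kind only indicates that the observed difference is unlikely to be a sampling accident; as the passage stresses, it does not by itself constitute proof of an urban effect in the absence of a verifiable physical chain of cause and effect.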
The problems of establishing "treatment" effects in inadvertent weather modification have close parallels in assessing the results of operational seeding projects. Results from nonrandomized commercial operations have not weighed heavily in establishing the scientific status of weather modification, mainly because they are nonrandomized. But now that some of the individual commercial operations have gone on for more than 20 years, perhaps meteorologists and statisticians should take another look at them.

6. ISSUES RAISED BY METEOROLOGISTS

In this brief discussion I have tried to give a general view of several issues that have arisen in field programs
in meteorology, as seen by a cloud physicist who has been in the field for several years. After accepting the invitation to prepare this article, I contacted several meteorologists, who have recently been involved in field programs, to get a first-hand impression of their interaction with statistics and statisticians. From this background a number of specific points have arisen which I would now like to put before the statistics community.
1. The most frequently mentioned issue in my contacts with leading experimental meteorologists is the need for greater involvement of statisticians in meteorological projects. Not only are there too few statisticians who have worked extensively with meteorological data, but among these, even fewer have had training in physics, chemistry, or meteorology. As a result, communication is obstructed, and understanding and mutual respect are discouraged.
What can we do to remedy this situation? I feel that large, stable meteorological organizations should employ many more statisticians at project management levels. We should promote interdepartmental cooperation at universities having strong and progressive departments of statistics and meteorology. Joint degree programs are a possibility. We should expand the use of joint conferences. The Skyline Conference group could be reconvened to update its findings. The Committee on Probability and Statistics of the American Meteorological Society regularly sponsors national meetings on probability and statistics in the context of meteorology and would appreciate contact with meteorologically inclined statisticians. I also advocate requiring more statistics in the formal training of meteorologists, but I have not yet been persuaded that every meteorologist should be a stand-alone statistician.
2. Another frequently voiced concern, not unrelated to the previous point, stems from experiences in seeking advice on statistical issues. Most weather researchers recognize that among the many reasons why an experiment can fail, not the least likely are the risks of false conclusions (or of no conclusion) because of improper sampling, unknown biases, chance events, etc. They admit the reality of these risks even though they may not appreciate their magnitudes or know how to minimize them. Most of the major experiments in cloud seeding have involved statisticians, especially in the experiment design and data evaluation phases. But some of my colleagues claim that it is frequently difficult to obtain consistent, understandable, and usable advice on what they had believed were rather basic issues of experiment design and data evaluation. (Some of these are discussed separately later.) As scientists, we are accustomed to differences in opinion when the evidence is thin or concepts are new and untested. But in such cases it is most helpful for "experts" to develop their views in terms clear and simple enough to enable a project director, or a public-servant decision maker, to choose which side of a statistical issue is most compelling in his or her situation.
3. Another problem, loosely called one of creditability,