situations".) There seems to be no universally satisfactory solution, though we are able to suggest some new compromises.

At one extreme, we might try to insist on having each "seed or not" decision made randomly, independently of what went before, and made AFTER the suitability of that day has been settled. This would avoid all difficulties with subjective judgment of suitability being influenced by knowledge of consequences. It could also, however, allow long runs of "seed" or of "don't seed", giving rise to doubts about the adequacy of balance between the corresponding sets of results.

At the other extreme, we could decide to divide "suitable days" into consecutive pairs, agreeing that one day in each pair will be seeded and the other not. Coupled with adequate randomization of which day in each pair is seeded, this ensures the tightest balance consistent with reliability. If, however, those responsible for judging suitability of what would be the second day of a pair know what has already been done on the first day, they also are sure whether or not the next day they declare suitable will in fact be seeded. Blindness will have been completely lost for half the days.

Neither extreme seems acceptable for the future. But since no wholly reasonable compromise seems to have been proposed earlier, our criticism of experiments already performed has to be limited to taking careful note of the consequences of the choices made.

We return, in sections 15 and 20, to some possibilities of compromise. (One of those that seems most attractive relies on restraining subjective decision-making by making at least half the decisions in favor of suitability objectively.)

* focusing vs. dispersal *

One of the major issues in the design of both exploratory and confirmatory phases of weather modification experiments is the degree to which attention is focused or dispersed. With a given set of data, do we ask 1, 10, 100 or 1000 questions? The more we ask, the less we learn about each! (We return to this point shortly.) Yet if we fail to ask the right one, we may miss exactly the information we are seeking.

It might seem that asking as many as 1000 questions is utterly unrealistic. But suppose that there are 10 groups of rain gauges; 50 combinations, each of some of the 10, might well define plausible alternative target areas, each of which may be thought to be of some interest and to be an at least marginally plausible choice. Having 6 kinds of storms picked out by different criteria (the kinds will probably overlap, but will still differ) is not at all unusual. And 3 different ways to analyze the results is rather common. Now 50 times 6 times 3 gives 900 combinations, bringing us 900 questions.
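As a rough illustration of how such question counts arise, the following sketch simply enumerates the combinations described above. The particular labels, and the choice of which 50 target areas count as plausible, are hypothetical stand-ins introduced only to make the multiplication concrete; they are not taken from the report.

```python
from itertools import combinations, product

GAUGE_GROUPS = range(10)  # the 10 groups of rain gauges (labels are hypothetical)

# Every non-empty subset of gauge groups is a conceivable target area;
# suppose analysts judge 50 of them to be at least marginally plausible.
conceivable_targets = [c for r in range(1, 11) for c in combinations(GAUGE_GROUPS, r)]
plausible_targets = conceivable_targets[:50]  # stand-in for the 50 judged plausible

storm_kinds = ["storm_criterion_%d" % k for k in range(6)]  # 6 overlapping kinds of storms
analyses = ["analysis_%d" % k for k in range(3)]            # 3 ways to analyze the results

questions = list(product(plausible_targets, storm_kinds, analyses))
print(len(conceivable_targets))  # 1023 conceivable target areas in all
print(len(questions))            # 50 * 6 * 3 = 900 questions
```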
The problem of too many questions is most stringent in confirmatory phases, where we are asking for confirmed evidence of what happened. As in any situation where variability is large and important, the only kind of confirmation we can have is to be able to say "either thus-and-so is true, or a very unlikely event has happened". So we must pay unusually careful attention both to our definition of "a very unlikely event" and to our understanding of it.

If we were to ask 1000 worthless questions, worthless in that the true answer to each is "nothing happened", and if we adhere to the customary 95%-and-5% distinction between likely events and unlikely ones, what would we expect -- and on the average get -- if we had independent data relevant to each of the 1000 questions? If unlikely events happen 5% of the time, we would expect them to happen to 5% of the questions. That means the expected answers would be 25 of "it went up", 950 of "no confidence that anything happened", and 25 of "it went down".
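A minimal simulation sketch of this expectation, assuming 1000 independent questions whose true answer is "nothing happened" and the customary two-sided 5% criterion; the uniform-score formulation and the variable names are our own assumptions, not the report's:

```python
import random

random.seed(1)                 # fixed seed so the illustration is reproducible

N_QUESTIONS = 1000             # worthless questions: the truth is "nothing happened"
ALPHA = 0.05                   # the customary 95%-and-5% distinction

went_up = went_down = no_confidence = 0
for _ in range(N_QUESTIONS):
    # Under the null hypothesis the test's p-like score is uniform on (0, 1);
    # the most extreme 2.5% in each direction count as "a very unlikely event".
    score = random.random()
    if score < ALPHA / 2:
        went_down += 1         # declared "it went down"
    elif score > 1 - ALPHA / 2:
        went_up += 1           # declared "it went up"
    else:
        no_confidence += 1     # "no confidence that anything happened"

print(went_up, no_confidence, went_down)  # roughly 25, 950, 25
```

Any single run scatters around these counts; it is only the long-run average that is pinned at 25, 950, and 25, the expected answers described above.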