purposes to be met by our standards remain the same, but the means for meeting these standards may vary somewhat -- but not seriously -- from those appropriate for pure experimentation, as we shall notice from time to time.

2. Quality Data

* blindness of judgment *

The facts that (i) not every day, not even every rainy day, is a likely candidate for seeding, and that (ii) the judgment of day quality still seems to have a large component of subjective judgment (as opposed to checking whether or not agreed-upon conditions are met) leave us in an unusually difficult situation. Given that the judgment will be made in total honesty, we know enough about less-than-conscious processes to be deeply concerned about possible differences in judgment as to the suitability of a day (or situation) between instances when the judgment-maker knows that this day, if judged suitable, will be seeded and those instances when he knows that it will not be seeded. Furthermore, the principle that "justice must be seen to be done" demands great care in avoiding any semblance that subjective judgments might have contributed appreciably to the observed results (even small biases can be serious, since it now seems reasonable to expect only small to moderate effects of seeding).

This is not an easy problem to attack. We cannot expect, yet, to go all the way toward objectivity of decision. We cannot yet be explicit enough about the important aspects of a situation. We must depend to a degree on expert judgment. But we must plan, carefully and vigorously, both (a) to use objective, auditable criteria as far as possible, and (b) to keep from anyone making the judgments the knowledge of whether this day (or situation), if declared "suitable", will indeed be seeded.
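The separation demanded in (b) -- the expert judges suitability using all available weather data, while the seed/no-seed assignment is held sealed by someone else and opened only afterwards, and only for days declared suitable -- can be caricatured in a modern sketch. All names, the pre-generated assignment list, and the two-arm design are illustrative assumptions, not details from the report.

```python
import random

def sealed_assignments(n_days, seed=12345):
    """Pre-generate seed/no-seed assignments (illustrative: held by an
    independent party, never shown to the suitability judge)."""
    rng = random.Random(seed)
    return [rng.choice(["seed", "no-seed"]) for _ in range(n_days)]

def decide_day(day_index, judge, assignments, log):
    """The expert judges suitability first, blind to the assignment;
    the sealed assignment is opened only for days judged suitable."""
    suitable = judge(day_index)        # uses weather data only
    log.append((day_index, suitable))  # auditable record of the judgment
    if not suitable:
        return None                    # unsuitable days are never unblinded
    return assignments[day_index]
```

The point of the sketch is purely the ordering: the auditable suitability record exists before anyone learns whether the day would have been seeded.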
And we also must do this in a way that does not give a false impression of distrust of individuals. Without experts performing at their highest skill, our experiments are likely to be doomed to misleading failure. To be careful about subjective aspects of decision should be viewed as reflecting deep respect for the proper role of the experts -- as allowing them to perform at their highest skill, using all the available weather data, but undistracted by a knowledge of whether seeding is to happen or not if they make the decision of suitability.

These situations where decisions of suitability are made in process, as for example by airborne scientists, raise the problem of blindness in its most difficult form. We regret to say that any confirmatory study whose results are to be taken seriously MUST confront this issue. Admittedly such judgments may enhance performance fairly as well as unfairly. Admittedly objective criteria for in-process changes of suitability may not perform as well as experts' judgment. The use of the objective criteria, however, is part of the price that must be paid if supposedly confirmatory studies are to be taken seriously.

Like the other aspects of data quality to which we turn next, maintaining adequate blindness is uncomfortable but very important.

* oversight and auditing *

Equal care is needed in the collection and handling of data. The historically common pattern of "collect the data now, organize and scan it next winter (or after the last year of the experiment)" cannot be commended and must be judged quite unsafe. Data should be scanned for plausibility and completeness almost at once. At the same time it can and should be entered into whatever archive or data management system is to be used.
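An immediate plausibility-and-completeness scan of the kind urged here might look like the following minimal sketch. The station names, the single valid range, and the dictionary layout are assumptions for illustration; the report itself prescribes only the practice, not any mechanism.

```python
def scan_observations(records, expected_stations, valid_range):
    """Flag gaps (missing stations) and implausible values right after
    collection, rather than at the end of the season or experiment.

    records: {station: observed value}; valid_range: (low, high)."""
    lo, hi = valid_range
    gaps = sorted(expected_stations - set(records))
    implausible = {s: v for s, v in records.items() if not (lo <= v <= hi)}
    return gaps, implausible
```

Anything flagged can then be pursued while the causes are still fresh, which is the substance of the recommendation.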
Gaps and implausible values should be the subject of immediate inquiry into causes -- and into steps that could be taken to