4.0 Ranking and Weighting Results

Due to time constraints, there was very limited discussion of the consequence table and the trade-offs among options prior to completion of the questionnaire. As a result, caution should be used in interpreting the results. It is likely that different participants interpreted the options, the attributes, and the attribute scores differently, leading to inconsistencies in ranking and weighting. Further, there was no opportunity for group discussion about the significance of a shift in attribute scores across options for any given endpoint, nor any opportunity for value-based discussion about the relative importance across endpoints. Therefore, the TWG should consider this a preliminary exercise and the beginning, rather than the conclusion, of a constructive deliberative process. The following summary is provided for discussion purposes.

Figure 1 compares ranks assigned by the direct method with ranks assigned by the swing weighting method for one example stakeholder. This format for presenting results is intended to help individual stakeholders improve the thoroughness and consistency of their choices. (With more time, each stakeholder would have received his or her individual results at the workshop.) Options ranked the same by both methods fall on or near the 45-degree line; options that fall far from the 45-degree line should trigger a re-examination of that option by the stakeholder. For example, Figure 1 shows that stakeholder TWG 3's ranks are quite consistent across the two methods for most options (differences of one or two places are not very significant). However, Option 7: P + 8H8F Anytime is ranked fairly low by the direct method but first by the weighting method, while Options 11 and 9 are ranked high by the direct method and low by swing weighting. These discrepancies do not necessarily mean that the direct rank is wrong, but they may indicate any of a number of problems, such as:

- mixing up the options, or misunderstanding their definitions, in the direct ranking (common when there are many options);
- overlooking some elements of performance in the direct ranking (common when there are many attributes);
- overlooking options that are less controversial or less visible (reflecting a tendency to spend more discussion time on options with either vocal champions or vocal opponents).

Alternatively, the direct ranking may be the more accurate reflection of the stakeholder's values if the attributes do not adequately capture all the important elements of performance (e.g., missing attributes, hidden thresholds, competing unidentified hypotheses). The intent of the multi-method approach is therefore not to say that one method is better than another, but to expose inconsistencies, clarify the rationale for choices, and improve the transparency and accountability of decisions (a sketch of this consistency check follows at the end of this section).

Across all stakeholders, the options that most frequently fell below the 45-degree line (while being direct-ranked in the top six) were Options 9 and 11. Options frequently falling above the 45-degree line (while being ranked in the top six by swing weights) included Options 7 and 6, and to a lesser extent 5 and 10.
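For illustration, the consistency check behind Figure 1 can be sketched in a few lines of Python. Everything in the example is hypothetical: the attribute scores, swing weights, and direct ranks are invented for the sketch and are not taken from the workshop data. Options are ranked by the weighted sum of their attribute scores, and any option whose direct and weighted ranks differ by more than a set tolerance is flagged for re-examination, corresponding to the points far from the 45-degree line.

    # Minimal sketch of the rank-consistency check; all data below are
    # hypothetical, not the workshop's actual scores or weights.

    def swing_weighted_ranks(scores, weights):
        """Rank options by the weighted sum of their attribute scores.

        scores  : dict mapping option name -> list of attribute scores
        weights : swing weights, normalized to sum to 1
        Returns a dict mapping option name -> rank (1 = best).
        """
        totals = {opt: sum(w * s for w, s in zip(weights, vals))
                  for opt, vals in scores.items()}
        ordered = sorted(totals, key=totals.get, reverse=True)
        return {opt: i + 1 for i, opt in enumerate(ordered)}

    def flag_discrepancies(direct_ranks, weighted_ranks, tolerance=2):
        """Flag options whose two ranks differ by more than the tolerance,
        i.e., the points far from the 45-degree line in Figure 1."""
        return [opt for opt in direct_ranks
                if abs(direct_ranks[opt] - weighted_ranks[opt]) > tolerance]

    # Hypothetical example: three attributes, four options.
    weights = [0.5, 0.3, 0.2]
    scores = {"Option 5": [60, 80, 40],
              "Option 7": [90, 70, 85],
              "Option 9": [40, 50, 95],
              "Option 11": [55, 45, 60]}
    direct = {"Option 5": 3, "Option 7": 4, "Option 9": 1, "Option 11": 2}

    weighted = swing_weighted_ranks(scores, weights)
    print(flag_discrepancies(direct, weighted))   # -> ['Option 7']

The default tolerance of two places mirrors the observation above that rank differences of one or two places are not very significant; only larger gaps, such as Option 7's in this made-up example, warrant a second look.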