UCREFRP Catalog Number: 9429
Author: Johnson, D. H.
Title: The Insignificance of Statistical Significance Testing
Year: no date
Document type: Journal of Wildlife Management
Copyright material: Yes
[Figure: two panels, (A) "LACK OF POWER" and (B) "LACK OF EFFECT," with the horizontal axis marked at µ0. Caption: Results of a test that failed to reject the null hypothesis that a mean µ equals µ0. Shaded areas indicate regions for which the hypothesis would be rejected. (A) suggests the null hypothesis may well be false, but the sample was too small to indicate significance; there is a lack of power. (B) suggests the data truly were consistent with the null hypothesis.]

(Peterman 1990, Thomas and Krebs 1997). The procedure can be used to estimate the sample size needed to have a specified probability (power = 1 - β) of declaring as significant (at the α level) a particular difference or effect (effect size). As such, the process can usefully be used to design a survey or experiment (Gerard et al. 1998). Its use is sometimes recommended to ascertain the power of the test after a study has been conducted and nonsignificant results obtained (The Wildlife Society 1995). The notion is to guard against wrongly declaring the null hypothesis to be true. Such retrospective power analysis can be misleading, however. Steidl et al. (1997:274) noted that power estimated with the data used to test the null hypothesis and the observed effect size is meaningless, as a high P-value will invariably result in low estimated power. Retrospective power estimates may be meaningful if they are computed with effect sizes different from the observed effect size. Power analysis programs, however, assume the input values for effect and variance are known, rather than estimated, so they give misleadingly high estimates of power (Steidl et al. 1997, Gerard et al. 1998). In addition, although statistical hypothesis testing invokes what I believe to be 1 rather arbitrary parameter (α or P), power analysis requires 3 of them (α, β, effect size). For further comments see Shaver (1993:309), who termed power analysis "a vacuous intellectual game," and who noted that the tendency to use criteria, such as Cohen's (1988) standards for small, medium, and large effect sizes, is as mindless as the practice of using the α = 0.05 criterion in statistical significance testing. Questions about the likely size of true effects can be better addressed with confidence intervals than with retrospective power analyses (e.g., Steidl et al. 1997, Steiger and Fouladi 1997).
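Steidl et al.'s point admits a compact numerical check. As a minimal sketch, assuming a two-sided z-test with α = 0.05 (the choice of test here is illustrative, not the paper's), power recomputed at the observed effect size reduces to a fixed, decreasing transform of the observed P-value, so a nonsignificant result necessarily yields low estimated power:

# A minimal sketch, not from the paper, assuming a two-sided z-test with
# alpha = 0.05: "retrospective power" evaluated at the observed effect size
# is a fixed, decreasing function of the observed P-value alone.
from scipy.stats import norm

ALPHA = 0.05
z_crit = norm.ppf(1 - ALPHA / 2)  # two-sided rejection threshold (about 1.96)

def retrospective_power(p_value):
    """Power recomputed with the observed effect size plugged in as truth."""
    z_obs = norm.ppf(1 - p_value / 2)  # |z| implied by the two-sided P-value
    # Probability of rejection when the true standardized effect equals z_obs
    return norm.sf(z_crit - z_obs) + norm.cdf(-z_crit - z_obs)

for p in (0.01, 0.05, 0.10, 0.25, 0.50):
    print(f"P = {p:.2f} -> retrospective power = {retrospective_power(p):.3f}")
# P = 0.05 maps to power of about 0.5, and larger (nonsignificant) P-values
# map to ever lower power, so the calculation only restates the P-value.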
Biological Versus Statistical Significance

Many authors make note of the distinction between statistical significance and subject-matter (in our case, biological) significance. Unimportant differences or effects that do not attain significance are okay, and important differences that do show up significant are excellent, for they facilitate publication (Table 1).

Table 1. Reaction of investigator to results of a statistical significance test (after Nester 1996).

                            Statistical significance
Perceived importance
of observed difference      Not significant     Significant
Not important               Happy               Annoyed
Important                   Very sad            Elated

Unimportant differences that turn out significant are annoying, and important differences that fail statistical detection are truly depressing. Recalling our earlier comments about the effect of sample size on P-values, the 2 outcomes that please the researcher suggest the sample size was about right (Table 2).

Table 2. Interpretation of sample size as related to results of a statistical significance test.

                            Statistical significance
Perceived importance
of observed difference      Not significant     Significant
Not important               n okay              n too big
Important                   n too small         n okay

The annoying unimportant dif-
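The confidence-interval alternative recommended earlier can likewise be sketched. Assuming made-up measurements for two groups (the numbers and the Welch t procedure are illustrative additions, not data or methods from the paper), a 95% interval for the difference in means gives the range of true effects consistent with the data, which can be weighed directly against a biologically important difference:

# A minimal sketch with hypothetical data: report an interval estimate of
# the effect instead of a retrospective power figure.
import numpy as np
from scipy import stats

treated = np.array([4.1, 5.3, 3.8, 6.0, 4.9, 5.5])  # illustrative values only
control = np.array([3.9, 4.2, 3.5, 4.8, 4.0, 4.4])

diff = treated.mean() - control.mean()
v1 = treated.var(ddof=1) / treated.size
v2 = control.var(ddof=1) / control.size
se = np.sqrt(v1 + v2)

# Welch-Satterthwaite degrees of freedom for unequal group variances
df = (v1 + v2) ** 2 / (v1**2 / (treated.size - 1) + v2**2 / (control.size - 1))
t_crit = stats.t.ppf(0.975, df)

# The interval's endpoints, not a power number, show how large the true
# difference could plausibly be; compare them with a biologically
# important effect size.
print(f"difference = {diff:.2f}, "
      f"95% CI = ({diff - t_crit * se:.2f}, {diff + t_crit * se:.2f})")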