Seeking evidence of absence: Reconsidering tests of model assumptions

Statistical tests can only reject the null hypothesis, never prove it. However, when researchers test modeling assumptions, they often interpret the failure to reject a null of ‘no violation’ as evidence that the assumption holds. We discuss the statistical and conceptual problems with this approach. We show that equivalence/non-inferiority tests, while giving correct Type I error, have low power to rule out many violations that are practically significant. We suggest sensitivity analyses that may be more appropriate than hypothesis testing.
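
To make the equivalence-testing idea concrete, here is a minimal sketch of a two one-sided tests (TOST) procedure in Python. The function name, the margin `delta`, and the simulated data are illustrative assumptions, not the paper's analysis; the point is only to show the mechanics, and that with modest samples the procedure often cannot declare equivalence even when there is no violation at all.

```python
# A minimal TOST sketch: is the mean difference within (-delta, +delta)?
# All names and numbers here are illustrative, not from the paper.
import numpy as np
from scipy import stats

def tost_one_sample(x, delta, alpha=0.05):
    """Two one-sided t-tests against the margins -delta and +delta.

    Rejecting both one-sided nulls at level `alpha` supports equivalence,
    i.e. the effect is bounded inside the chosen margin.
    """
    # Lower test: H0: mu <= -delta  vs  H1: mu > -delta
    p_lower = stats.ttest_1samp(x, -delta, alternative="greater").pvalue
    # Upper test: H0: mu >= +delta  vs  H1: mu < +delta
    p_upper = stats.ttest_1samp(x, +delta, alternative="less").pvalue
    p_tost = max(p_lower, p_upper)  # overall TOST p-value
    return p_tost, p_tost < alpha

# Illustration: no true violation, but a small sample and a tight margin
# typically leave too little power to conclude equivalence.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=20)
p, equivalent = tost_one_sample(x, delta=0.1)
print(f"TOST p-value = {p:.3f}, equivalence declared: {equivalent}")
```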
